Nov 23 06:50:15 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 23 06:50:15 crc restorecon[4821]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 23 06:50:15 crc restorecon[4821]: 
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 
06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 23 06:50:15 crc 
restorecon[4821]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:15 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 
06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc 
restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 23 06:50:16 crc restorecon[4821]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 23 06:50:16 crc kubenswrapper[5028]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 06:50:16 crc kubenswrapper[5028]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 23 06:50:16 crc kubenswrapper[5028]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 06:50:16 crc kubenswrapper[5028]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
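The restorecon pass ends at this point: every path under /var/lib/kubelet whose context an admin had customized was left alone rather than reset, and each pod's files carry container_file_t with a pod-specific MCS category pair (e.g. s0:c7,c13) that isolates one container's files from another's. A minimal sketch of how those entries could be grouped per pod, assuming the journal was saved to a file — the name journal.log is an illustration, not something the log mentions:

#!/usr/bin/env python3
# Hypothetical summarizer (not part of the log): groups the restorecon
# "not reset as customized by admin" entries by pod UID and shows which
# SELinux context(s) each pod's files kept. The input filename is an
# assumption for illustration only.
import re
from collections import defaultdict

ENTRY = re.compile(
    r"/var/lib/kubelet/pods/([\w-]+)/\S* not reset as customized by admin to "
    r"(system_u:object_r:\w+:s0(?::c\d+,c\d+)?)"
)

def summarize(path: str) -> dict[str, set[str]]:
    contexts: dict[str, set[str]] = defaultdict(set)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            # finditer handles the wrapped dump, where several journal
            # entries can share one physical line.
            for m in ENTRY.finditer(line):
                contexts[m.group(1)].add(m.group(2))
    return contexts

if __name__ == "__main__":
    for pod, ctxs in sorted(summarize("journal.log").items()):
        print(pod, "->", ", ".join(sorted(ctxs)))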
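The kubenswrapper entries here are the kubelet's standard deprecation notices: each flag is meant to move into the KubeletConfiguration file named by --config. A sketch of that mapping for the four flags reported so far, under the assumption that the kubelet.config.k8s.io/v1beta1 field names below are current — verify against the kubelet-config-file documentation linked in the log before relying on them:

#!/usr/bin/env python3
# Hypothetical reference table (not part of the log): deprecated kubelet
# flags from the entries above, and where the equivalent setting lives in
# the KubeletConfiguration file passed via --config. Field names are an
# assumption based on kubelet.config.k8s.io/v1beta1.
FLAG_TO_CONFIG = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    # Per the log itself, this flag has no config-file field: use the
    # evictionHard / evictionSoft settings instead.
    "--minimum-container-ttl-duration": None,
}

if __name__ == "__main__":
    for flag, field in FLAG_TO_CONFIG.items():
        print(f"{flag:35} -> {field or 'use eviction settings instead'}")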
Nov 23 06:50:16 crc kubenswrapper[5028]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 23 06:50:16 crc kubenswrapper[5028]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.768685 5028 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776578 5028 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776621 5028 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776634 5028 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776646 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776656 5028 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776666 5028 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776675 5028 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776683 5028 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776691 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776699 5028 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776707 5028 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776715 5028 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776723 5028 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776730 5028 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776738 5028 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776746 5028 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776755 5028 feature_gate.go:330] unrecognized feature gate: Example Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776762 5028 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776770 5028 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776777 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776785 5028 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776793 5028 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776800 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776808 5028 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776815 5028 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776822 5028 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776830 5028 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776838 5028 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776846 5028 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776855 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776885 5028 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776893 5028 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776901 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776909 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776919 5028 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776929 5028 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776939 5028 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776981 5028 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.776992 5028 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777002 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777011 5028 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777020 5028 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777029 5028 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777038 5028 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777046 5028 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777055 5028 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777062 5028 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777070 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777078 5028 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777086 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777093 5028 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777101 5028 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777109 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777117 5028 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777126 5028 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777134 5028 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777143 5028 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777152 5028 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777160 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777168 5028 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777176 5028 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777183 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777191 5028 
feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777199 5028 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777208 5028 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777215 5028 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777223 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777231 5028 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777239 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777248 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.777260 5028 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777433 5028 flags.go:64] FLAG: --address="0.0.0.0" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777457 5028 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777476 5028 flags.go:64] FLAG: --anonymous-auth="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777491 5028 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777505 5028 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777515 5028 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777529 5028 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777550 5028 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777560 5028 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777569 5028 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777579 5028 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777589 5028 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777598 5028 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777608 5028 flags.go:64] FLAG: --cgroup-root="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777616 5028 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777625 5028 flags.go:64] FLAG: --client-ca-file="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777633 5028 flags.go:64] FLAG: --cloud-config="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777642 5028 flags.go:64] FLAG: --cloud-provider="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777651 5028 flags.go:64] FLAG: --cluster-dns="[]" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777663 
5028 flags.go:64] FLAG: --cluster-domain="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777673 5028 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777682 5028 flags.go:64] FLAG: --config-dir="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777691 5028 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777700 5028 flags.go:64] FLAG: --container-log-max-files="5" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777711 5028 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777720 5028 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777730 5028 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777739 5028 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777749 5028 flags.go:64] FLAG: --contention-profiling="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777758 5028 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777767 5028 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777777 5028 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777786 5028 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777797 5028 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777806 5028 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777815 5028 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777824 5028 flags.go:64] FLAG: --enable-load-reader="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777833 5028 flags.go:64] FLAG: --enable-server="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777841 5028 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777852 5028 flags.go:64] FLAG: --event-burst="100" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777862 5028 flags.go:64] FLAG: --event-qps="50" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777870 5028 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777879 5028 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777888 5028 flags.go:64] FLAG: --eviction-hard="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777899 5028 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777908 5028 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777917 5028 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777928 5028 flags.go:64] FLAG: --eviction-soft="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777937 5028 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 23 06:50:16 crc 
kubenswrapper[5028]: I1123 06:50:16.777976 5028 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777985 5028 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.777994 5028 flags.go:64] FLAG: --experimental-mounter-path="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778003 5028 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778012 5028 flags.go:64] FLAG: --fail-swap-on="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778022 5028 flags.go:64] FLAG: --feature-gates="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778033 5028 flags.go:64] FLAG: --file-check-frequency="20s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778043 5028 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778052 5028 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778061 5028 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778070 5028 flags.go:64] FLAG: --healthz-port="10248" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778079 5028 flags.go:64] FLAG: --help="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778090 5028 flags.go:64] FLAG: --hostname-override="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778098 5028 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778107 5028 flags.go:64] FLAG: --http-check-frequency="20s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778116 5028 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778125 5028 flags.go:64] FLAG: --image-credential-provider-config="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778134 5028 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778143 5028 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778152 5028 flags.go:64] FLAG: --image-service-endpoint="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778161 5028 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778170 5028 flags.go:64] FLAG: --kube-api-burst="100" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778179 5028 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778189 5028 flags.go:64] FLAG: --kube-api-qps="50" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778197 5028 flags.go:64] FLAG: --kube-reserved="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778207 5028 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778216 5028 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778225 5028 flags.go:64] FLAG: --kubelet-cgroups="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778233 5028 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778242 5028 flags.go:64] FLAG: --lock-file="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 
06:50:16.778251 5028 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778260 5028 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778269 5028 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778282 5028 flags.go:64] FLAG: --log-json-split-stream="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778293 5028 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778302 5028 flags.go:64] FLAG: --log-text-split-stream="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778310 5028 flags.go:64] FLAG: --logging-format="text" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778320 5028 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778329 5028 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778338 5028 flags.go:64] FLAG: --manifest-url="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778347 5028 flags.go:64] FLAG: --manifest-url-header="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778359 5028 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778368 5028 flags.go:64] FLAG: --max-open-files="1000000" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778379 5028 flags.go:64] FLAG: --max-pods="110" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778388 5028 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778397 5028 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778407 5028 flags.go:64] FLAG: --memory-manager-policy="None" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778416 5028 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778425 5028 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778433 5028 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778443 5028 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778462 5028 flags.go:64] FLAG: --node-status-max-images="50" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778471 5028 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778480 5028 flags.go:64] FLAG: --oom-score-adj="-999" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778489 5028 flags.go:64] FLAG: --pod-cidr="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778498 5028 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778511 5028 flags.go:64] FLAG: --pod-manifest-path="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778520 5028 flags.go:64] FLAG: --pod-max-pids="-1" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778530 5028 flags.go:64] FLAG: --pods-per-core="0" Nov 23 
06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778539 5028 flags.go:64] FLAG: --port="10250" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778548 5028 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778557 5028 flags.go:64] FLAG: --provider-id="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778566 5028 flags.go:64] FLAG: --qos-reserved="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778575 5028 flags.go:64] FLAG: --read-only-port="10255" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778584 5028 flags.go:64] FLAG: --register-node="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778593 5028 flags.go:64] FLAG: --register-schedulable="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778636 5028 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778654 5028 flags.go:64] FLAG: --registry-burst="10" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778663 5028 flags.go:64] FLAG: --registry-qps="5" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778672 5028 flags.go:64] FLAG: --reserved-cpus="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778682 5028 flags.go:64] FLAG: --reserved-memory="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778694 5028 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778703 5028 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778713 5028 flags.go:64] FLAG: --rotate-certificates="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778722 5028 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778731 5028 flags.go:64] FLAG: --runonce="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778739 5028 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778750 5028 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778760 5028 flags.go:64] FLAG: --seccomp-default="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778768 5028 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778777 5028 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778787 5028 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778796 5028 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778805 5028 flags.go:64] FLAG: --storage-driver-password="root" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778814 5028 flags.go:64] FLAG: --storage-driver-secure="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778823 5028 flags.go:64] FLAG: --storage-driver-table="stats" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778832 5028 flags.go:64] FLAG: --storage-driver-user="root" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778841 5028 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778851 5028 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 
06:50:16.778860 5028 flags.go:64] FLAG: --system-cgroups="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778869 5028 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778884 5028 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778893 5028 flags.go:64] FLAG: --tls-cert-file="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778902 5028 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778914 5028 flags.go:64] FLAG: --tls-min-version="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778923 5028 flags.go:64] FLAG: --tls-private-key-file="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778931 5028 flags.go:64] FLAG: --topology-manager-policy="none" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778940 5028 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778979 5028 flags.go:64] FLAG: --topology-manager-scope="container" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.778989 5028 flags.go:64] FLAG: --v="2" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.779000 5028 flags.go:64] FLAG: --version="false" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.779012 5028 flags.go:64] FLAG: --vmodule="" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.779022 5028 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.779032 5028 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779234 5028 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779245 5028 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779257 5028 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779266 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779275 5028 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779287 5028 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779299 5028 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779310 5028 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779320 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779330 5028 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779340 5028 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779348 5028 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779356 5028 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779363 5028 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779371 5028 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779379 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779387 5028 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779395 5028 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779402 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779410 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779417 5028 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779432 5028 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779440 5028 feature_gate.go:330] unrecognized feature gate: Example Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779447 5028 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779455 5028 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779462 5028 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779471 5028 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779478 5028 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779489 5028 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779500 5028 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779509 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779519 5028 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779527 5028 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779535 5028 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779543 5028 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779551 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779558 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779566 5028 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779577 5028 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779585 5028 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779593 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779601 5028 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779609 5028 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779619 5028 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779628 5028 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779637 5028 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779645 5028 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779654 5028 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779662 5028 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779671 5028 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779679 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779687 5028 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779695 5028 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779706 5028 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779713 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779721 5028 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779729 5028 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779737 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779744 5028 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779752 5028 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779761 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779772 5028 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779780 5028 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779788 5028 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779795 5028 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779803 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779811 5028 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779818 5028 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779827 5028 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779834 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 
23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.779842 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.779866 5028 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.792572 5028 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.792603 5028 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792744 5028 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792756 5028 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792766 5028 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792775 5028 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792783 5028 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792791 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792800 5028 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792809 5028 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792817 5028 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792826 5028 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792834 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792841 5028 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792850 5028 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792857 5028 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792865 5028 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792876 5028 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792885 5028 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792893 5028 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792901 5028 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792909 5028 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792916 5028 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792924 5028 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792932 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792940 5028 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792975 5028 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792983 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792991 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.792998 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793006 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793014 5028 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793022 5028 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793030 5028 feature_gate.go:330] unrecognized feature gate: Example Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793037 5028 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793045 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793055 5028 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793063 5028 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793070 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793078 5028 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793086 5028 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793094 5028 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793102 5028 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793111 5028 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793122 5028 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793131 5028 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793138 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793146 5028 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793154 5028 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793161 5028 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793169 5028 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793176 5028 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793184 5028 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793191 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793199 5028 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793207 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793217 5028 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793229 5028 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793238 5028 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793246 5028 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793253 5028 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793264 5028 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793273 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793282 5028 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793292 5028 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793300 5028 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793310 5028 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793318 5028 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793327 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793335 5028 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793344 5028 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793353 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793362 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.793375 5028 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793589 5028 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793602 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793612 5028 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793622 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793630 5028 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793639 5028 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793647 5028 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793656 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793665 5028 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793674 5028 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793681 5028 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793689 5028 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793697 5028 feature_gate.go:330] unrecognized 
feature gate: AWSClusterHostedDNS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793705 5028 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793712 5028 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793720 5028 feature_gate.go:330] unrecognized feature gate: Example Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793728 5028 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793735 5028 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793746 5028 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793758 5028 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793767 5028 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793775 5028 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793783 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793791 5028 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793799 5028 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793807 5028 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793814 5028 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793821 5028 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793829 5028 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793836 5028 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793844 5028 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793852 5028 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793859 5028 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793869 5028 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793878 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793887 5028 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793894 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793902 5028 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 
06:50:16.793909 5028 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793917 5028 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793924 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793932 5028 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793942 5028 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793975 5028 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793985 5028 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.793994 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794003 5028 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794012 5028 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794020 5028 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794029 5028 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794038 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794046 5028 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794054 5028 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794062 5028 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794070 5028 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794078 5028 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794088 5028 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794098 5028 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794106 5028 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794114 5028 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794122 5028 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794130 5028 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794137 5028 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794145 5028 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794153 5028 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794161 5028 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794168 5028 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794176 5028 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794184 5028 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794191 5028 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.794200 5028 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.794211 5028 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.794436 5028 server.go:940] "Client rotation is on, will bootstrap in background" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.800199 5028 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.800361 5028 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
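All four feature-gate dumps in this boot converge on the same effective set, logged as feature gates: {map[...]} above. The names flagged as unrecognized appear to be cluster-scoped OpenShift gates that the kubelet's own gate registry does not carry; only the overrides the kubelet recognizes survive into the map. As a sketch, the same overrides expressed through the --config file rather than the command line would use the standard featureGates field of KubeletConfiguration, with the values taken directly from the logged map:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # gates the log shows pinned on (the kubelet warns that the GA and
  # deprecated ones will be removed in a future release)
  CloudDualStackNodeIPs: true
  DisableKubeletCloudCredentialProviders: true
  KMSv1: true
  ValidatingAdmissionPolicy: true
  # gates the log shows pinned off
  DynamicResourceAllocation: false
  EventedPLEG: false
  MaxUnavailableStatefulSet: false
  NodeSwap: false
  ProcMountType: false
  RouteExternalCertificate: false
  ServiceAccountTokenNodeBinding: false
  TranslateStreamCloseWebsocketRequests: false
  UserNamespacesPodSecurityStandards: false
  UserNamespacesSupport: false
  VolumeAttributesClass: false
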
Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.802345 5028 server.go:997] "Starting client certificate rotation" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.802388 5028 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.802652 5028 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-06 20:05:19.091302847 +0000 UTC Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.802817 5028 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.838144 5028 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 06:50:16 crc kubenswrapper[5028]: E1123 06:50:16.840039 5028 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.849565 5028 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.874657 5028 log.go:25] "Validated CRI v1 runtime API" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.916450 5028 log.go:25] "Validated CRI v1 image API" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.919872 5028 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.928409 5028 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-23-06-40-53-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.928460 5028 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.964169 5028 manager.go:217] Machine: {Timestamp:2025-11-23 06:50:16.959064925 +0000 UTC m=+0.656469794 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fc0a1b0a-26b0-4c3e-92d4-29192e43f43f BootID:ff061f1e-f458-4bca-a72d-af8aa57016f2 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d1:f7:e8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d1:f7:e8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:60:fc:17 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:64:e5:b8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:77:73:ec Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:bd:91:95 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:f0:a3:b5 Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:53:84:95 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:f6:73:98:1f:30:57 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d2:38:ff:04:8e:f5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.964563 5028 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.964758 5028 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.966715 5028 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.967464 5028 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.967576 5028 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.969035 5028 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.969081 5028 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.970018 5028 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.970063 5028 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.970654 5028 state_mem.go:36] "Initialized new in-memory state store" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.970822 5028 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.974809 5028 kubelet.go:418] "Attempting to sync node with API server" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.974859 5028 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.974882 5028 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.974898 5028 kubelet.go:324] "Adding apiserver pod source" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.974913 5028 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.978840 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:16 crc kubenswrapper[5028]: E1123 06:50:16.978990 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:16 crc kubenswrapper[5028]: W1123 06:50:16.979311 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:16 crc kubenswrapper[5028]: E1123 06:50:16.979374 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.980158 5028 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.982536 5028 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.985517 5028 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.987875 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.987921 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.987939 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.987979 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988004 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988018 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988032 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988063 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988082 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988097 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988121 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.988136 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.990740 5028 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.991660 5028 server.go:1280] "Started kubelet" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.992138 5028 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.992480 5028 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.992921 5028 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.993123 5028 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:16 crc systemd[1]: Started Kubernetes Kubelet. 
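server.go:1280/163 mark the kubelet as started and listening on 0.0.0.0:10250, and ratelimit.go:55 configures the podresources API with qps=100 and burstTokens=10. That is a classic token bucket; a sketch of the same shape using golang.org/x/time/rate (only the limiter parameters come from the log; the gRPC wiring to /var/lib/kubelet/pod-resources/kubelet.sock is omitted):

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// qps=100, burstTokens=10, as logged for the podresources service:
	// up to 10 requests may land back-to-back, then the bucket refills
	// at 100 tokens per second.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	allowed, throttled := 0, 0
	for i := 0; i < 1000; i++ { // a synthetic burst of requests
		if limiter.Allow() {
			allowed++
		} else {
			throttled++
		}
	}
	fmt.Printf("allowed=%d throttled=%d\n", allowed, throttled)
}
```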
Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.994239 5028 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.994296 5028 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.994342 5028 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:33:51.286039384 +0000 UTC Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.994384 5028 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 23 06:50:16 crc kubenswrapper[5028]: I1123 06:50:16.994813 5028 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.003105 5028 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:16.997822 5028 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.003857 5028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.005064 5028 server.go:460] "Adding debug handlers to kubelet server" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.005449 5028 factory.go:55] Registering systemd factory Nov 23 06:50:17 crc kubenswrapper[5028]: W1123 06:50:17.005517 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.005598 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.005563 5028 factory.go:221] Registration of the systemd container factory successfully Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.003852 5028 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a901361fb1ded default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-23 06:50:16.991596013 +0000 UTC m=+0.689000802,LastTimestamp:2025-11-23 06:50:16.991596013 +0000 UTC m=+0.689000802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.007573 5028 factory.go:153] Registering CRI-O factory Nov 23 06:50:17 crc kubenswrapper[5028]: 
I1123 06:50:17.007607 5028 factory.go:221] Registration of the crio container factory successfully Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.007688 5028 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.007716 5028 factory.go:103] Registering Raw factory Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.007738 5028 manager.go:1196] Started watching for new ooms in manager Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.009086 5028 manager.go:319] Starting recovery of all containers Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.017437 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.017503 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.017524 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.017541 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.017557 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.017569 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.017581 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019736 5028 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019784 5028 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019806 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019817 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019830 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019842 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019854 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019869 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019880 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019891 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019904 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019917 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019933 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019961 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019974 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.019993 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020006 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020021 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020034 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020047 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020061 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020073 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020082 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020094 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020107 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020118 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020127 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020138 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020149 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020164 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020176 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020188 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020197 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020205 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020215 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020225 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020236 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020246 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020256 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020300 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020312 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020325 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020336 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020348 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020360 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020372 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020389 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020400 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020413 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020425 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020437 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020450 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020460 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020472 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020482 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020490 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020504 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020516 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020527 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020538 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020547 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020558 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020568 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020576 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020585 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020595 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020603 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020613 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020623 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020634 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020667 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020676 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020684 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020699 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020709 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020720 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020735 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020747 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020756 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020767 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020779 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020790 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020801 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020811 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020821 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020830 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020841 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020853 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020863 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020872 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020881 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020892 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020902 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020912 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020969 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020981 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.020991 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021000 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021015 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021027 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021039 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021050 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021062 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021073 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021087 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021099 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021111 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021124 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021135 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021144 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021156 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021166 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021176 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021186 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021229 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021239 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021250 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021263 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021275 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021286 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021296 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021307 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021317 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021329 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021339 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021350 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021360 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021369 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021378 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021390 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021401 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021412 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021422 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021431 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021440 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021450 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021460 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021471 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021481 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021490 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021501 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021510 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021519 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021530 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021540 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021549 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021558 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021568 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021577 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021591 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021603 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021614 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021624 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021635 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021646 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021656 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021667 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021677 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021687 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021698 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021718 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021731 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021746 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021759 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021770 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021780 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021794 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021805 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021816 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021827 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021838 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021847 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021858 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021868 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021878 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021889 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021899 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021910 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021921 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021931 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021942 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021970 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021980 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.021990 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022000 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022010 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022020 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022030 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022040 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022050 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022061 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022071 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022083 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022094 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022104 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022114 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022126 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022137 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022147 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022158 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022168 5028 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022180 5028 reconstruct.go:97] "Volume reconstruction finished" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.022190 5028 reconciler.go:26] "Reconciler: start to sync state" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.041775 5028 manager.go:324] Recovery completed Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.049134 5028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.051647 5028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.051724 5028 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.051760 5028 kubelet.go:2335] "Starting kubelet main sync loop" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.052034 5028 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 06:50:17 crc kubenswrapper[5028]: W1123 06:50:17.052482 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.052535 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.053789 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.055334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.055368 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.055377 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.055883 5028 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.055902 5028 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.055925 5028 state_mem.go:36] "Initialized new in-memory state store" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.079827 5028 policy_none.go:49] "None policy: Start" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.081056 5028 memory_manager.go:170] "Starting 
memorymanager" policy="None" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.081099 5028 state_mem.go:35] "Initializing new in-memory state store" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.103921 5028 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.139168 5028 manager.go:334] "Starting Device Plugin manager" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.139371 5028 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.139390 5028 server.go:79] "Starting device plugin registration server" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.139833 5028 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.139856 5028 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.140049 5028 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.140385 5028 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.140411 5028 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.146973 5028 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.153128 5028 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.153203 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.155167 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.155208 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.155222 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.155376 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.155563 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.155626 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157151 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157174 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157186 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157248 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157273 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157287 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157295 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157552 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.157632 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.158983 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.159007 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.159016 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.161264 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.161300 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.161316 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.161452 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.161616 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.161675 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.162686 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.162712 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.162722 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.162722 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.162804 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.162820 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.163005 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.163080 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.163104 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164141 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164168 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164178 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164217 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164247 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164257 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164361 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.164392 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.165102 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.165130 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.165142 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.204985 5028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224299 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224351 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224378 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224400 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224423 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224447 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224528 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224585 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224636 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224720 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224760 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224788 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224832 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224896 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.224938 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.240371 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 
06:50:17.241543 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.241580 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.241590 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.241618 5028 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.242072 5028 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326182 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326250 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326272 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326331 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326350 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326400 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326385 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326430 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326418 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326479 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326515 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326515 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326535 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326555 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326542 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326565 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326577 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326594 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326608 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326649 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326664 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326682 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326723 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326735 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326718 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326754 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326787 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326760 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326854 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.326773 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.443002 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.444162 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.444220 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.444232 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.444266 5028 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.444773 5028 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.490372 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.514930 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.522691 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: W1123 06:50:17.536697 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4dfad7d4cb9b0dbe3c3249e735d91da10f39f8fa14a07b48b24546d4d3af665a WatchSource:0}: Error finding container 4dfad7d4cb9b0dbe3c3249e735d91da10f39f8fa14a07b48b24546d4d3af665a: Status 404 returned error can't find the container with id 4dfad7d4cb9b0dbe3c3249e735d91da10f39f8fa14a07b48b24546d4d3af665a Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.543174 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.548606 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:50:17 crc kubenswrapper[5028]: W1123 06:50:17.557867 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0a8a5ecc489aff80e6d80ba3c2ae4d7e0b748b1d3a55ffc4e0b489954682446a WatchSource:0}: Error finding container 0a8a5ecc489aff80e6d80ba3c2ae4d7e0b748b1d3a55ffc4e0b489954682446a: Status 404 returned error can't find the container with id 0a8a5ecc489aff80e6d80ba3c2ae4d7e0b748b1d3a55ffc4e0b489954682446a Nov 23 06:50:17 crc kubenswrapper[5028]: W1123 06:50:17.562356 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f3f5f9d628196ab96d0e86b0d87393319aedc5b0df6ecfe8eefc32150c75c743 WatchSource:0}: Error finding container f3f5f9d628196ab96d0e86b0d87393319aedc5b0df6ecfe8eefc32150c75c743: Status 404 returned error can't find the container with id f3f5f9d628196ab96d0e86b0d87393319aedc5b0df6ecfe8eefc32150c75c743 Nov 23 06:50:17 crc kubenswrapper[5028]: W1123 06:50:17.571881 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f3c2c54dbcc3bf4f80943b21bcd711170b9d9177b21afbf587656533690345c5 WatchSource:0}: Error finding container f3c2c54dbcc3bf4f80943b21bcd711170b9d9177b21afbf587656533690345c5: Status 404 returned error can't find the container with id f3c2c54dbcc3bf4f80943b21bcd711170b9d9177b21afbf587656533690345c5 Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.606121 5028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.845603 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.847748 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.847799 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.847811 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.847845 5028 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:50:17 crc kubenswrapper[5028]: E1123 06:50:17.848514 5028 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.994200 5028 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.995296 5028 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 13:39:05.062993579 +0000 UTC Nov 23 06:50:17 crc kubenswrapper[5028]: I1123 06:50:17.995445 5028 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1326h48m47.067556842s for next certificate rotation Nov 23 06:50:18 crc kubenswrapper[5028]: W1123 06:50:18.036353 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:18 crc kubenswrapper[5028]: E1123 06:50:18.036435 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:18 crc kubenswrapper[5028]: W1123 06:50:18.047443 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:18 crc kubenswrapper[5028]: E1123 06:50:18.047610 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.057472 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f3c2c54dbcc3bf4f80943b21bcd711170b9d9177b21afbf587656533690345c5"} Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.059157 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d7000b15125c21c17556ef797aebb00da7ef8a8013b227348cce39d58f1488e"} Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.060784 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3f5f9d628196ab96d0e86b0d87393319aedc5b0df6ecfe8eefc32150c75c743"} Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.062516 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a8a5ecc489aff80e6d80ba3c2ae4d7e0b748b1d3a55ffc4e0b489954682446a"} Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.063742 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4dfad7d4cb9b0dbe3c3249e735d91da10f39f8fa14a07b48b24546d4d3af665a"} Nov 23 06:50:18 crc kubenswrapper[5028]: W1123 06:50:18.398574 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:18 crc kubenswrapper[5028]: E1123 06:50:18.399249 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:18 crc kubenswrapper[5028]: E1123 06:50:18.407774 5028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s" Nov 23 06:50:18 crc kubenswrapper[5028]: W1123 06:50:18.567502 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:18 crc kubenswrapper[5028]: E1123 06:50:18.567612 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.649231 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.650594 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.650633 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.650644 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.650673 5028 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:50:18 crc kubenswrapper[5028]: E1123 06:50:18.651077 5028 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.984698 5028 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 23 06:50:18 crc kubenswrapper[5028]: E1123 06:50:18.986592 5028 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
Nov 23 06:50:18 crc kubenswrapper[5028]: I1123 06:50:18.995119 5028 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.068547 5028 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f" exitCode=0
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.068694 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.068810 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f"}
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.070358 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.070399 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.070413 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.073308 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd"}
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.073347 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59"}
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.073359 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162"}
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.075128 5028 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470" exitCode=0
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.075207 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470"}
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.075304 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.076894 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.076999 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.077027 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.078935 5028 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a" exitCode=0
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.079075 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.079087 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a"}
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.080418 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.083106 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.083158 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.083184 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.085814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.085879 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.085902 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.087923 5028 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4" exitCode=0
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.088020 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.088034 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4"}
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.089108 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.089146 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.089161 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:19 crc kubenswrapper[5028]: W1123 06:50:19.943515 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Nov 23 06:50:19 crc kubenswrapper[5028]: E1123 06:50:19.943595 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError"
Nov 23 06:50:19 crc kubenswrapper[5028]: I1123 06:50:19.994462 5028 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Nov 23 06:50:20 crc kubenswrapper[5028]: E1123 06:50:20.009352 5028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="3.2s"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.092774 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.092852 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.093812 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.093849 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.093859 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.096667 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.096697 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.096709 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.096783 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.098002 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.098033 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.098044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.099471 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.099581 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.101018 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.101044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.101053 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.103514 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.103560 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.103582 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.103639 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.105654 5028 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f" exitCode=0
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.105697 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f"}
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.105785 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.107098 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
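[annotation] Every pod starting here is a static pod: the kubelet creates sandboxes and containers from on-disk manifests even though every API call still fails with connection refused, and the pods are only reported back to the API server later as mirror pods. A sketch that merely lists such a manifest directory — the path is the usual default and an assumption for this host:

```go
// Sketch: list the static pod manifest directory the kubelet watches.
package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/kubernetes/manifests") // assumed path
	if err != nil {
		fmt.Println("read manifests:", err)
		return
	}
	for _, e := range entries {
		fmt.Println("static pod manifest:", e.Name())
	}
}
```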
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.107123 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.107132 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:20 crc kubenswrapper[5028]: W1123 06:50:20.197066 5028 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Nov 23 06:50:20 crc kubenswrapper[5028]: E1123 06:50:20.197148 5028 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.251781 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.252976 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.253015 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.253028 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:20 crc kubenswrapper[5028]: I1123 06:50:20.253053 5028 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:50:20 crc kubenswrapper[5028]: E1123 06:50:20.253502 5028 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.112361 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c"} Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.112455 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.113687 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.113741 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.113758 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.115792 5028 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475" exitCode=0 Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.115881 5028 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.115908 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.115961 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.115929 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475"} Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.115914 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.117171 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118529 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118556 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118565 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118660 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118690 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118688 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118794 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118820 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118837 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118875 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.118706 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.255728 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.727156 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 
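[annotation] The "SyncLoop (probe)" transitions here — startup "unhealthy" twice, then "started" just below — come from the cluster-policy-controller startup probe that later surfaces as Get https://192.168.126.11:10357/healthz. A sketch of a probe of that shape in Go API types; the path and port are taken from the probe output later in this log, while period and threshold are illustrative assumptions:

```go
// Sketch of an HTTPS startup probe like the one driving these entries.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	p := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/healthz",
				Port:   intstr.FromInt32(10357),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 30, // assumed
	}
	fmt.Printf("startupProbe: %s %s:%d every %ds\n",
		p.HTTPGet.Scheme, p.HTTPGet.Path, p.HTTPGet.Port.IntValue(), p.PeriodSeconds)
}
```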
Nov 23 06:50:21 crc kubenswrapper[5028]: I1123 06:50:21.737991 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.122569 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529"}
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.122611 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc"}
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.122623 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742"}
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.122632 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16"}
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.122667 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.123045 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.123147 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.123149 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.125383 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.125391 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.125417 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.125425 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.125436 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.125441 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:22 crc kubenswrapper[5028]: I1123 06:50:22.693046 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.070710 5028 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.130187 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af"}
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.130335 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.130419 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.130388 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.130528 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132336 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132377 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132388 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132417 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132459 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132484 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132543 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132657 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.132722 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.454511 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.456692 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.456737 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.456797 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.456827 5028 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.694343 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.797479 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.798088 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.799967 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.799998 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:23 crc kubenswrapper[5028]: I1123 06:50:23.800007 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.132660 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.132703 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.134405 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.134464 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.134486 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.134514 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.134544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.134557 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.242192 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.242787 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.244133 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.244174 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.244190 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.256539 5028 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.256606 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 06:50:24 crc kubenswrapper[5028]: I1123 06:50:24.270921 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.135332 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.136594 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.136616 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.136625 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.578577 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.578811 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.580551 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.580784 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:25 crc kubenswrapper[5028]: I1123 06:50:25.580937 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:26 crc kubenswrapper[5028]: I1123 06:50:26.733337 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 23 06:50:26 crc kubenswrapper[5028]: I1123 06:50:26.733499 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:26 crc kubenswrapper[5028]: I1123 06:50:26.736725 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:26 crc kubenswrapper[5028]: I1123 06:50:26.736769 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:26 crc kubenswrapper[5028]: I1123 06:50:26.736779 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:27 crc kubenswrapper[5028]: E1123 06:50:27.147067 5028 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 23 06:50:29 crc kubenswrapper[5028]: I1123 06:50:29.898934 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 23 06:50:29 crc kubenswrapper[5028]: I1123 06:50:29.899267 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:29 crc kubenswrapper[5028]: I1123 06:50:29.901031 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:29 crc kubenswrapper[5028]: I1123 06:50:29.901094 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:29 crc kubenswrapper[5028]: I1123 06:50:29.901114 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:30 crc kubenswrapper[5028]: I1123 06:50:30.995168 5028 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.038608 5028 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.038679 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.046375 5028 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.046439 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.152600 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.154048 5028 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c" exitCode=255
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.154092 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c"}
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.154239 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.155155 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.155182 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.155206 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:31 crc kubenswrapper[5028]: I1123 06:50:31.155638 5028 scope.go:117] "RemoveContainer" containerID="1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c"
Nov 23 06:50:32 crc kubenswrapper[5028]: I1123 06:50:32.159097 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 23 06:50:32 crc kubenswrapper[5028]: I1123 06:50:32.161622 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178"}
Nov 23 06:50:32 crc kubenswrapper[5028]: I1123 06:50:32.161798 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:32 crc kubenswrapper[5028]: I1123 06:50:32.163132 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:32 crc kubenswrapper[5028]: I1123 06:50:32.163193 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:32 crc kubenswrapper[5028]: I1123 06:50:32.163213 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.243195 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.243468 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.245002 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.245063 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.245081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.256262 5028 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.256370 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 23 06:50:34 crc kubenswrapper[5028]: I1123 06:50:34.280781 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 23 06:50:35 crc kubenswrapper[5028]: I1123 06:50:35.169320 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
attach/detach" Nov 23 06:50:35 crc kubenswrapper[5028]: I1123 06:50:35.171354 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:35 crc kubenswrapper[5028]: I1123 06:50:35.171410 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:35 crc kubenswrapper[5028]: I1123 06:50:35.171428 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:35 crc kubenswrapper[5028]: I1123 06:50:35.173922 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:36 crc kubenswrapper[5028]: E1123 06:50:36.017422 5028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.030653 5028 trace.go:236] Trace[1604303902]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:50:23.446) (total time: 12584ms): Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[1604303902]: ---"Objects listed" error: 12584ms (06:50:36.030) Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[1604303902]: [12.584068543s] [12.584068543s] END Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.030695 5028 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.031437 5028 trace.go:236] Trace[806878025]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:50:21.073) (total time: 14957ms): Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[806878025]: ---"Objects listed" error: 14957ms (06:50:36.031) Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[806878025]: [14.95765914s] [14.95765914s] END Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.031520 5028 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.033353 5028 trace.go:236] Trace[1185550043]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:50:21.744) (total time: 14288ms): Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[1185550043]: ---"Objects listed" error: 14288ms (06:50:36.033) Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[1185550043]: [14.288413906s] [14.288413906s] END Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.033392 5028 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 23 06:50:36 crc kubenswrapper[5028]: E1123 06:50:36.033539 5028 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.033671 5028 trace.go:236] Trace[1704627682]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Nov-2025 06:50:24.597) (total time: 11436ms): Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[1704627682]: ---"Objects listed" error: 11435ms (06:50:36.033) Nov 23 06:50:36 crc kubenswrapper[5028]: Trace[1704627682]: [11.43626656s] [11.43626656s] END Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.033712 5028 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.040283 5028 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.071578 5028 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.776671 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.794250 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 23 06:50:36 crc kubenswrapper[5028]: I1123 06:50:36.991191 5028 apiserver.go:52] "Watching apiserver" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.002478 5028 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.003030 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.003629 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.003834 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.003908 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.004022 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.004090 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.004131 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.004199 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.004254 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.004974 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.005255 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.005447 5028 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.009531 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.009814 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.009941 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.010303 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.010500 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.010648 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.011722 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.017226 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.043516 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046110 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046146 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046167 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046186 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046224 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046240 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " 
Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046256 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046271 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046288 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046302 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046315 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046333 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046347 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046361 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046375 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046390 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:50:37 crc kubenswrapper[5028]: 
I1123 06:50:37.046419 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046438 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046452 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046467 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046477 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046483 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046533 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046552 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046568 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046603 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: 
I1123 06:50:37.046618 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046658 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046673 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046688 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046704 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046720 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046739 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046754 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046771 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046786 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046823 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046842 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046859 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046875 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046874 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046893 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046909 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046926 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.046940 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047026 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047040 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047054 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047069 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047110 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047124 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047140 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047154 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047168 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047184 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047201 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047216 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047231 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047246 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047263 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047278 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047292 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047326 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047349 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047365 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047382 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047396 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047410 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047425 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047440 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047484 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047501 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047519 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047534 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047551 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047568 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047585 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047601 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047615 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047630 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047644 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047663 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047677 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047695 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047711 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047726 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047741 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047758 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047772 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047789 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047809 5028 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047824 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047846 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047873 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047904 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047932 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047974 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047999 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048046 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048073 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048095 5028 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048135 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048159 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048181 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048204 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048227 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048249 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048270 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048293 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048315 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048337 5028 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048359 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048383 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048408 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048432 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048456 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048477 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048497 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048518 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048537 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 23 06:50:37 crc 
kubenswrapper[5028]: I1123 06:50:37.048558 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048579 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048598 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048620 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048637 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048652 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048668 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048683 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048698 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048716 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " 
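The wall of reconciler_common.go:159 entries here is the kubelet volume manager's reconciler at work: each of these pods has been removed from the desired state of the world while its volumes are still recorded in the actual state of the world, so the reconciler starts an UnmountVolume operation for every stale mount, one log entry per volume. Below is a minimal sketch of that desired-versus-actual pattern; the types and names are invented for illustration, and this is not kubelet's actual code.

```go
// Toy illustration of the desired/actual reconcile loop behind the
// "operationExecutor.UnmountVolume started" entries. All names invented.
package main

import "fmt"

// mountedVolume mirrors the fields visible in the log lines: a UniqueName of
// the form "kubernetes.io/<plugin>/<pod-UID>-<volume>" plus the owning pod UID.
type mountedVolume struct {
	uniqueName string
	podUID     string
}

func main() {
	// Actual state: volumes still mounted on the node (two samples from the log).
	actual := []mountedVolume{
		{"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config", "6509e943-70c6-444c-bc41-48a544e36fbd"},
		{"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert", "0b78653f-4ff9-4508-8672-245ed9b561e3"},
	}
	// Desired state: empty, because these pods have been deleted.
	desired := map[string]bool{}

	// Reconcile: anything mounted but no longer desired gets an unmount started.
	for _, v := range actual {
		if !desired[v.uniqueName] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n",
				v.uniqueName, v.podUID)
			// The real kubelet hands the operation to an executor that runs
			// TearDown on its own goroutine and later logs
			// "UnmountVolume.TearDown succeeded" (operation_generator.go:803).
		}
	}
}
```

That per-volume fan-out onto goroutines is also why the matching operation_generator.go:803 "TearDown succeeded" entries further down carry out-of-order timestamps (for example, 06:50:37.047088 is logged after 06:50:37.050839): the operations complete concurrently, not in the order they were started.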
Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048733 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048753 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048771 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048787 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048803 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048819 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048836 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048852 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048868 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048884 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048903 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048989 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049007 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049027 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049051 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049083 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049109 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049135 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049162 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049186 5028 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049209 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049231 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049255 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049278 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049304 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049329 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049350 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049373 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049398 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049422 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049448 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049470 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049492 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049515 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049540 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049566 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049591 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049616 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049640 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049664 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049687 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049706 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049726 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049748 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049770 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049793 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049819 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049845 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049872 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049896 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049918 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049941 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050112 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050139 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050162 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050184 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050208 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050232 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050260 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050283 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050310 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050331 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050351 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050375 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050398 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050420 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050480 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050517 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc 
kubenswrapper[5028]: I1123 06:50:37.050540 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050560 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050583 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050606 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050625 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050643 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050663 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050682 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050700 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050717 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050736 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050754 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050772 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050827 5028 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050839 5028 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047088 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047145 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047283 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047313 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047307 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047470 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047693 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047750 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.047830 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048073 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048088 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048155 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048190 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048238 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048291 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048733 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.054568 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.054552 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.054629 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049068 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049139 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049194 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049279 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049281 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049506 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049610 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049724 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049727 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049748 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049846 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.049887 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050130 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050203 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050234 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.050294 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.052455 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.052898 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053022 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053136 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053326 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053355 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053368 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053387 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053602 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053714 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.053797 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.054017 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.054278 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.048997 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.054835 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055080 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055131 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055182 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055195 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055304 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.055454 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:50:37.555432526 +0000 UTC m=+21.252837305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055462 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055532 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055514 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055681 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055778 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055805 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.055997 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
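[Note on the TearDown failure above: the kubelet cannot tear down the hostpath-provisioner PVC because the CSI driver has not yet re-registered over the kubelet's plugin-registration socket after the restart, so the volume manager parks the operation and retries with exponential backoff ("No retries permitted until ... durationBeforeRetry 500ms"). A minimal Go sketch of that retry-with-backoff shape; the names, the doubling policy, and the cap are illustrative assumptions, not the kubelet's actual nestedpendingoperations code:]

    package main

    import (
    	"errors"
    	"fmt"
    	"sync/atomic"
    	"time"
    )

    const (
    	initialBackoff = 500 * time.Millisecond // the 500ms seen in the log
    	maxBackoff     = 2 * time.Minute        // assumed cap
    )

    // retryWithBackoff runs op until it succeeds, doubling the wait after
    // each failure, giving up once the wait would exceed maxBackoff.
    func retryWithBackoff(op func() error) error {
    	backoff := initialBackoff
    	for attempt := 1; ; attempt++ {
    		if err := op(); err == nil {
    			return nil
    		} else if backoff > maxBackoff {
    			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
    		} else {
    			fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", attempt, err, backoff)
    			time.Sleep(backoff)
    			backoff *= 2
    		}
    	}
    }

    func main() {
    	// Simulate a CSI driver that re-registers shortly after a kubelet restart.
    	var registered atomic.Bool
    	go func() {
    		time.Sleep(1200 * time.Millisecond)
    		registered.Store(true)
    	}()

    	err := retryWithBackoff(func() error {
    		if !registered.Load() {
    			return errors.New("driver not found in the list of registered CSI drivers")
    		}
    		return nil // TearDown would proceed here
    	})
    	fmt.Println("final result:", err)
    }

[With the 1200ms registration delay, the first two attempts fail and the third succeeds — the same pattern the kubelet follows once kubevirt.io.hostpath-provisioner re-registers.]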
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056134 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056132 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056306 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056329 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056346 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056502 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056538 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056707 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056717 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056753 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.056842 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058255 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058400 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058409 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058443 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058471 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058500 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058659 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058672 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058748 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058759 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.058889 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059037 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059168 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059033 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059235 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059279 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059331 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059512 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059588 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059780 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059984 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059981 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.059911 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.060143 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.060208 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.060422 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.060609 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.060697 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.060856 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.060870 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.061201 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.061206 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.061217 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.061736 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.061787 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.061838 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.061889 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.062019 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.062062 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.062212 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.062387 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.062476 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.062572 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.063198 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.063576 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.063650 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.063797 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064008 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064180 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064183 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064515 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064570 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064770 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064803 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064882 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.064932 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.065038 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.065712 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.065970 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.066136 5028 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.066376 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.066407 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.066504 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.066842 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.066926 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.067334 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.066009 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.067660 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.067852 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068024 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068026 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068202 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068239 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068353 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068431 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068585 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068850 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068923 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068974 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069052 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069168 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069209 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069214 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069229 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.068790 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069234 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069273 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069252 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069341 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069178 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069350 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069451 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.069592 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069614 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.069667 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.070089 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.070147 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.070324 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.070185 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:37.570165029 +0000 UTC m=+21.267569808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.070430 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.070721 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.070933 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:37.570917608 +0000 UTC m=+21.268322387 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.072026 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.072179 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.076393 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.076712 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.076911 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.077199 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.077443 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.077533 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.077756 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.078017 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.078020 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.078023 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.078477 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.078837 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.079578 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.080314 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.081663 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.083415 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.083757 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.084795 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.085137 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.085138 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.085476 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.085496 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.085546 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.085731 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.086026 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.087087 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.087789 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.087992 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.088822 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.089082 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.089485 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089571 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089609 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089625 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089679 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:37.58966017 +0000 UTC m=+21.287065029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089573 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089724 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089736 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.089765 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:37.589756263 +0000 UTC m=+21.287161032 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.091118 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.096110 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.096832 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.096896 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.099251 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.100275 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.101686 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.102980 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.103439 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.104738 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.105482 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.106934 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.108044 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.111260 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.112108 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.112819 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.113488 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.114205 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.114288 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.114907 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.116092 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.116668 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.116716 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.117203 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.118002 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.118442 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.119092 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.119932 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.120495 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.121342 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.121792 5028 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.121890 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.123886 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.124411 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.124808 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.126340 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.129553 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.130387 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.130412 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.132114 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.133019 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.133576 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.134865 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.136099 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.136699 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.137588 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.138154 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.139118 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.139850 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 23 06:50:37 crc 
kubenswrapper[5028]: I1123 06:50:37.140718 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.141184 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.141656 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.142544 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.143149 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.144167 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.145129 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.151235 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.151331 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.151486 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.151710 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152490 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152522 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152534 5028 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152544 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152556 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152565 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152574 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152584 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152595 5028 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152603 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152614 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152624 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152634 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152644 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152654 5028 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152663 5028 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152671 5028 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152679 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152688 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152697 5028 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152705 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152714 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152723 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152732 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152740 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152750 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152758 5028 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152767 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152776 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 
06:50:37.152784 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152792 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152800 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152808 5028 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152816 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152825 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152836 5028 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152844 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152852 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152860 5028 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152869 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152877 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152885 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc 
kubenswrapper[5028]: I1123 06:50:37.152894 5028 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152902 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152911 5028 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152920 5028 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152928 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152937 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152964 5028 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152973 5028 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152981 5028 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152989 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.152997 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153005 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153014 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153022 5028 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153031 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153040 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153049 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153058 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153068 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153079 5028 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153088 5028 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153098 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153106 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153115 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153123 5028 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153133 5028 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath 
\"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153141 5028 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153150 5028 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153158 5028 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153167 5028 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153175 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153183 5028 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153851 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153862 5028 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153871 5028 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153880 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153888 5028 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153897 5028 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153905 5028 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" 
DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153914 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153922 5028 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153930 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153938 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153961 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.153971 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154000 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154010 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154019 5028 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154028 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154037 5028 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154045 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154054 5028 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154062 5028 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154070 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154078 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154086 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154094 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154102 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154110 5028 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154118 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154127 5028 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154135 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154143 5028 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154151 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154159 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154168 5028 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154177 5028 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154185 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154193 5028 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154200 5028 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154208 5028 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154216 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154224 5028 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154231 5028 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154239 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154247 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154255 5028 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154263 5028 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154476 5028 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154485 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154494 5028 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154504 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154511 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154519 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154527 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154535 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154545 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154553 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154562 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154571 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154580 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154588 5028 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154596 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154604 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154612 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154620 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154627 5028 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154635 5028 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154665 5028 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154674 5028 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154682 5028 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154690 5028 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154698 5028 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154705 5028 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154713 5028 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154743 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154753 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154761 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154768 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154777 5028 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154791 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154807 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154819 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154830 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154840 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154849 5028 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154857 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154865 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154873 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154883 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154891 5028 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154899 5028 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154906 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154914 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154921 5028 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154930 5028 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154939 5028 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154968 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154976 5028 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.154984 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155465 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 23 
06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155477 5028 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155485 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155493 5028 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155501 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155509 5028 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155516 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155525 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155554 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155566 5028 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155580 5028 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155590 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155599 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155607 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" 
DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155615 5028 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155624 5028 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155631 5028 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155638 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155646 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.155654 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.166248 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.177910 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.182394 5028 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.188625 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.198224 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
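
The quoted payloads in these records are strategic-merge patches against the pod's status. The "$setElementOrder/conditions" directive tells the server to merge the conditions list by each entry's type key and to keep the listed ordering, rather than merging by position. A short sketch of the payload shape, with the uid copied from the network-check-target record and the condition entries abbreviated:

// patchshape.go: reconstructs the shape of one status patch from the log.
// Only the structure matters here; the field values are trimmed examples.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"metadata": map[string]any{"uid": "3b6479f0-333b-4a96-9adf-2099afdc2447"},
		"status": map[string]any{
			// Strategic-merge directive: merge conditions by "type", keep this order.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"}, {"type": "Initialized"},
				{"type": "Ready"}, {"type": "ContainersReady"}, {"type": "PodScheduled"},
			},
			"conditions": []map[string]string{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	b, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(b))
}
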
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.207620 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.217348 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
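
Several containerStatuses above carry lastState.terminated with exitCode 137 and reason ContainerStatusUnknown: the container vanished while the pod was being deleted, and 137 is the shell convention for death by signal (128 plus the signal number, here SIGKILL). The 255 from kube-apiserver-check-endpoints earlier is different, matching klog's fatal-exit status rather than a signal. A one-file decoder for the convention:

// exitcode.go: decodes the 137 seen in the terminated states above.
package main

import "fmt"

func main() {
	exitCode := 137 // from lastState.terminated in the log
	if exitCode > 128 {
		fmt.Printf("exit code %d = 128 + signal %d (9 = SIGKILL)\n", exitCode, exitCode-128)
	}
}
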
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.226561 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.234480 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.244352 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.253371 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
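
The termination message embedded in the kube-apiserver-crc status (shown in full near the top of this excerpt and repeated verbatim further down) explains the check-endpoints restart: while the apiserver was still coming up, the container's delegating-authentication setup tried to fetch the extension-apiserver-authentication configmap and the request died with "net/http: TLS handshake timeout". The sketch below reproduces only that client-side condition; the URL is copied from the log, the timeout values are illustrative, it sends no credentials (a healthy apiserver would answer 401 or 403, which is fine, since the point is whether the handshake completes), and InsecureSkipVerify merely keeps the probe from tripping over the cluster CA.

// handshaketimeout.go: standalone probe of the request that killed
// kube-apiserver-check-endpoints; not the component's actual code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 15 * time.Second,
		Transport: &http.Transport{
			TLSHandshakeTimeout: 5 * time.Second, // illustrative; the real client's limit fired after ~10s in the log
			TLSClientConfig:     &tls.Config{InsecureSkipVerify: true},
		},
	}
	// URL taken from the check-endpoints termination message.
	resp, err := client.Get("https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication")
	if err != nil {
		fmt.Println("request failed:", err) // e.g. connection refused or TLS handshake timeout
		return
	}
	defer resp.Body.Close()
	fmt.Println("handshake completed, status:", resp.Status)
}
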
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.267702 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.277059 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.292109 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7da
a115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.302563 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.319873 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.328727 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 23 06:50:37 crc kubenswrapper[5028]: W1123 06:50:37.331209 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-cf33fc075ceee2c40b76872f9deb0443e0fa06edd84ee7d9ecc334a53cd585ed WatchSource:0}: Error finding container cf33fc075ceee2c40b76872f9deb0443e0fa06edd84ee7d9ecc334a53cd585ed: Status 404 returned error can't find the container with id cf33fc075ceee2c40b76872f9deb0443e0fa06edd84ee7d9ecc334a53cd585ed Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.333148 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 23 06:50:37 crc kubenswrapper[5028]: W1123 06:50:37.339599 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4fb8fb2b2c8013799fad040558ec666977f7df60d1b033e0b93a23ffbafcf7e0 WatchSource:0}: Error finding container 4fb8fb2b2c8013799fad040558ec666977f7df60d1b033e0b93a23ffbafcf7e0: Status 404 returned error can't find the container with id 4fb8fb2b2c8013799fad040558ec666977f7df60d1b033e0b93a23ffbafcf7e0 Nov 23 06:50:37 crc kubenswrapper[5028]: W1123 06:50:37.348935 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f15c39e91c722cf33444ff84a4817922778420c71369ccd231a98cb124d4f67b WatchSource:0}: Error finding container f15c39e91c722cf33444ff84a4817922778420c71369ccd231a98cb124d4f67b: Status 404 returned error can't find the container with id f15c39e91c722cf33444ff84a4817922778420c71369ccd231a98cb124d4f67b Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.559353 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.559492 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:50:38.559475331 +0000 UTC m=+22.256880110 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.660467 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.660512 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.660533 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:37 crc kubenswrapper[5028]: I1123 06:50:37.660557 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660629 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660689 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:38.660672808 +0000 UTC m=+22.358077587 (durationBeforeRetry 1s). 
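
Both volume errors around here follow the kubelet's nestedpendingoperations pattern: a failed mount or unmount arms a per-volume gate, and any retry before the gate expires is refused with "No retries permitted until ..." (the durationBeforeRetry, 1s at this step of the backoff). The unmount itself fails because the kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the kubelet since the restart, so TearDownAt cannot even obtain a CSI client. A toy reconstruction of the gate under invented names (opGate, tryRun, and the error text are all stand-ins for the real volume manager internals):

// backoffgate.go: illustrative only; mimics the "No retries permitted until"
// behavior in the log. None of these identifiers exist in the kubelet.
package main

import (
	"fmt"
	"time"
)

type opGate struct {
	notBefore map[string]time.Time // volume name -> earliest allowed retry
	delay     time.Duration
}

func (g *opGate) tryRun(vol string, op func() error) {
	if t, ok := g.notBefore[vol]; ok && time.Now().Before(t) {
		fmt.Printf("operation for %q rejected: no retries permitted until %s\n", vol, t.Format(time.RFC3339Nano))
		return
	}
	if err := op(); err != nil {
		g.notBefore[vol] = time.Now().Add(g.delay) // arm the gate
		fmt.Printf("operation for %q failed (durationBeforeRetry %s): %v\n", vol, g.delay, err)
	}
}

func main() {
	g := &opGate{notBefore: map[string]time.Time{}, delay: time.Second}
	teardown := func() error {
		return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	g.tryRun("pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", teardown) // fails, arms the 1s gate
	g.tryRun("pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", teardown) // rejected until the gate expires
}
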
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660712 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660751 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660761 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660800 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660858 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:38.660841132 +0000 UTC m=+22.358245911 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660878 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:38.660868983 +0000 UTC m=+22.358273762 (durationBeforeRetry 1s). 
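
The kube-api-access-* mounts report two bracketed errors at once because a projected service-account volume is assembled from several sources (the token plus the kube-root-ca.crt and openshift-service-ca.crt configmaps) and succeeds only if every source resolves; with the kubelet's object cache not yet synced after the restart, both configmaps are "not registered". A sketch of that all-or-nothing aggregation, where resolveConfigMap and the registered map stand in for the kubelet's cache lookup:

// projectedsources.go: sketch of the aggregation behind the two-part
// MountVolume.SetUp error above; names here are stand-ins, not kubelet APIs.
package main

import (
	"errors"
	"fmt"
)

func resolveConfigMap(ns, name string, registered map[string]bool) error {
	if !registered[ns+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", ns, name)
	}
	return nil
}

func main() {
	registered := map[string]bool{} // object cache empty before the informers sync
	var errs []error
	for _, cm := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if err := resolveConfigMap("openshift-network-diagnostics", cm, registered); err != nil {
			errs = append(errs, err)
		}
	}
	// The kubelet prints these as one bracketed list; errors.Join gives the
	// same all-or-nothing result, one source error per line.
	if err := errors.Join(errs...); err != nil {
		fmt.Println("Error preparing data for projected volume kube-api-access:", err)
	}
}
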
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660923 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660939 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.660981 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:37 crc kubenswrapper[5028]: E1123 06:50:37.661034 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:38.661015396 +0000 UTC m=+22.358420195 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.052431 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.052632 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
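
network-check-target is not even retried at the volume level here: the pod sync is skipped because the runtime reports NetworkReady=false, there being no CNI configuration file in /etc/kubernetes/cni/net.d/ until the cluster's network plugin writes one. A minimal check of the same condition; the glob patterns are the conventional CNI ones, and the kubelet's real probe goes through the container runtime status rather than reading the directory itself:

// cnicheck.go: standalone approximation of the NetworkReady condition in the
// log; the kubelet actually learns this from CRI-O, not from the filesystem.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var confs []string
	for _, pat := range []string{"*.conf", "*.conflist"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		confs = append(confs, m...)
	}
	if len(confs) == 0 {
		fmt.Fprintln(os.Stderr, "NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configuration present:", confs)
}
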
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.178437 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee"} Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.178500 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a"} Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.178515 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f15c39e91c722cf33444ff84a4817922778420c71369ccd231a98cb124d4f67b"} Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.179981 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4fb8fb2b2c8013799fad040558ec666977f7df60d1b033e0b93a23ffbafcf7e0"} Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.181583 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374"} Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.181656 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cf33fc075ceee2c40b76872f9deb0443e0fa06edd84ee7d9ecc334a53cd585ed"} Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.199467 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.216539 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.239115 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.261197 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.277152 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.291050 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.312258 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.331712 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.355645 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc9
02d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.372193 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.384323 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.398770 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.412586 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.425153 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.441697 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.457088 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:38Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.569300 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.569540 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:50:40.569504649 +0000 UTC m=+24.266909428 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.669938 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.670002 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.670020 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:38 crc kubenswrapper[5028]: I1123 06:50:38.670041 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670101 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670144 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:40.670131742 +0000 UTC m=+24.367536521 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670293 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670309 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670366 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670389 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670428 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670443 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670395 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670375 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:40.670353987 +0000 UTC m=+24.367758796 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670579 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:40.670544322 +0000 UTC m=+24.367949101 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:38 crc kubenswrapper[5028]: E1123 06:50:38.670604 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:40.670595053 +0000 UTC m=+24.367999832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:39 crc kubenswrapper[5028]: I1123 06:50:39.052169 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:39 crc kubenswrapper[5028]: I1123 06:50:39.052230 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:39 crc kubenswrapper[5028]: E1123 06:50:39.052426 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:39 crc kubenswrapper[5028]: E1123 06:50:39.052514 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:39 crc kubenswrapper[5028]: I1123 06:50:39.056072 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.052299 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.052471 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.188873 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781"} Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.210170 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.231745 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.246675 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.266450 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.281627 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.298995 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.316896 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.333767 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:40Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.586819 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.587032 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:50:44.586996502 +0000 UTC m=+28.284401331 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.688111 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.688161 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.688182 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:40 crc kubenswrapper[5028]: I1123 06:50:40.688199 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688266 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688491 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688552 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688579 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688316 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:40 crc 
kubenswrapper[5028]: E1123 06:50:40.688664 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688683 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688496 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688606 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:44.688578078 +0000 UTC m=+28.385982897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688919 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:44.688830424 +0000 UTC m=+28.386235243 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.688994 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:44.688976188 +0000 UTC m=+28.386381007 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:40 crc kubenswrapper[5028]: E1123 06:50:40.689038 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:44.689016469 +0000 UTC m=+28.386421488 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.052353 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:41 crc kubenswrapper[5028]: E1123 06:50:41.052618 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.052678 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:41 crc kubenswrapper[5028]: E1123 06:50:41.052830 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.261067 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.264847 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.270231 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.274418 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.289199 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.306210 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.322560 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.350739 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.368970 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.382369 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.396359 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.412145 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.427478 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.443211 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.460457 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.477649 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.490134 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.513980 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.528717 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.547239 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.882605 5028 csr.go:261] certificate signing request csr-smx7t is approved, waiting to be issued Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.900332 5028 csr.go:257] certificate signing request csr-smx7t is issued Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.914165 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-678pf"] Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.914451 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.917609 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.917838 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.918185 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.931100 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.944486 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.964672 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.979045 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:41Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.998567 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb22n\" (UniqueName: \"kubernetes.io/projected/34c0e27d-8812-4054-83c4-eca66db0655e-kube-api-access-zb22n\") pod \"node-resolver-678pf\" (UID: \"34c0e27d-8812-4054-83c4-eca66db0655e\") " pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:41 crc kubenswrapper[5028]: I1123 06:50:41.998644 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34c0e27d-8812-4054-83c4-eca66db0655e-hosts-file\") pod \"node-resolver-678pf\" (UID: \"34c0e27d-8812-4054-83c4-eca66db0655e\") " pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.003121 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.020191 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.033650 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.045173 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.052197 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:42 crc kubenswrapper[5028]: E1123 06:50:42.052350 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.055208 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.071027 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.099486 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34c0e27d-8812-4054-83c4-eca66db0655e-hosts-file\") pod \"node-resolver-678pf\" (UID: \"34c0e27d-8812-4054-83c4-eca66db0655e\") " pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.099573 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb22n\" (UniqueName: \"kubernetes.io/projected/34c0e27d-8812-4054-83c4-eca66db0655e-kube-api-access-zb22n\") pod \"node-resolver-678pf\" (UID: \"34c0e27d-8812-4054-83c4-eca66db0655e\") " pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.099713 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34c0e27d-8812-4054-83c4-eca66db0655e-hosts-file\") pod \"node-resolver-678pf\" (UID: \"34c0e27d-8812-4054-83c4-eca66db0655e\") " pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.118062 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb22n\" (UniqueName: \"kubernetes.io/projected/34c0e27d-8812-4054-83c4-eca66db0655e-kube-api-access-zb22n\") pod \"node-resolver-678pf\" (UID: \"34c0e27d-8812-4054-83c4-eca66db0655e\") " pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.226480 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-678pf" Nov 23 06:50:42 crc kubenswrapper[5028]: W1123 06:50:42.235785 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34c0e27d_8812_4054_83c4_eca66db0655e.slice/crio-2a77d630800681313a775f908ded93d2f2c544882f56a2bbc258da5fbbcf4a0c WatchSource:0}: Error finding container 2a77d630800681313a775f908ded93d2f2c544882f56a2bbc258da5fbbcf4a0c: Status 404 returned error can't find the container with id 2a77d630800681313a775f908ded93d2f2c544882f56a2bbc258da5fbbcf4a0c Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.434061 5028 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.435574 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.435599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.435607 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.435657 5028 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.445965 5028 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.446052 5028 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.447027 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.447047 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.447056 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.447070 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.447080 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: E1123 06:50:42.465356 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.469046 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.469080 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.469092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.469109 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.469121 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: E1123 06:50:42.485258 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.488221 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.488255 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.488267 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.488283 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.488296 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: E1123 06:50:42.500207 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.503081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.503109 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.503119 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.503131 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.503147 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: E1123 06:50:42.514321 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.517907 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.517990 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.518002 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.518024 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.518035 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: E1123 06:50:42.530251 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: E1123 06:50:42.530388 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.532279 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.532334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.532346 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.532367 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.532380 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.634232 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.634277 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.634289 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.634305 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.634317 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.736368 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.736402 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.736413 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.736430 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.736440 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.763978 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-m2sl7"] Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.764312 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m2sl7" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.766210 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.766230 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.766272 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.767100 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xbtxp"] Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.767777 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-7l9fm"] Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.768038 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.768628 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-th92p"] Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.768975 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.768989 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.769371 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.769694 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.770498 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.770522 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.770833 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771102 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771182 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771283 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771335 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771336 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771283 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771628 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771677 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.771705 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.772011 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.776009 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.778436 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.791390 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.802686 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.826469 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.838849 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.838882 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.838894 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.838909 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.838919 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.842538 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.856117 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.869733 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.887499 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.899412 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.901438 5028 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-23 06:45:41 +0000 UTC, rotation deadline is 2026-09-24 21:13:12.416329759 +0000 UTC
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.901492 5028 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7334h22m29.514840188s for next certificate rotation
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905289 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-cnibin\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905317 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-netns\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905341 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdsqn\" (UniqueName: \"kubernetes.io/projected/68dc0fb8-309c-46ef-a4f8-f0eff3169061-kube-api-access-xdsqn\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905366 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-log-socket\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905384 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-script-lib\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905447 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whssm\" (UniqueName: \"kubernetes.io/projected/e634c65f-8585-4d5d-b929-b9e1255f8921-kube-api-access-whssm\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905516 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905549 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905571 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-os-release\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905590 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e634c65f-8585-4d5d-b929-b9e1255f8921-cni-binary-copy\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905611 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-k8s-cni-cncf-io\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905672 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-hostroot\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905712 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-daemon-config\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905736 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-conf-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905759 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-kubelet\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905779 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-systemd-units\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905797 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-kubelet\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905870 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-etc-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905892 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905913 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa1c051a-31cd-4dd3-9be8-6194822c2273-mcd-auth-proxy-config\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905933 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cnibin\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.905986 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-netns\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906021 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-netd\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906055 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-config\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906088 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvg8v\" (UniqueName: \"kubernetes.io/projected/aa1c051a-31cd-4dd3-9be8-6194822c2273-kube-api-access-hvg8v\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906110 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-var-lib-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906126 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cni-binary-copy\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906142 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-cni-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906168 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-slash\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906185 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-socket-dir-parent\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906228 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-cni-multus\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906272 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-cni-bin\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906312 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-ovn\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906338 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa1c051a-31cd-4dd3-9be8-6194822c2273-proxy-tls\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906361 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-system-cni-dir\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906384 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rmx9\" (UniqueName: \"kubernetes.io/projected/5609ffb8-6ac2-4716-8c08-c466b3dd987b-kube-api-access-7rmx9\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906422 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-etc-kubernetes\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906489 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-node-log\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906523 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-ovn-kubernetes\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906555 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-env-overrides\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906590 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-bin\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906633 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-os-release\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906668 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovn-node-metrics-cert\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906693 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-system-cni-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906716 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-systemd\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906737 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aa1c051a-31cd-4dd3-9be8-6194822c2273-rootfs\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.906766 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-multus-certs\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.916290 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.928785 
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.928785 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.942132 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.942180 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.942193 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.942211 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.942230 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:42Z","lastTransitionTime":"2025-11-23T06:50:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.951580 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:42 crc kubenswrapper[5028]: I1123 06:50:42.989838 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:42Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.002395 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007676 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-slash\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007713 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007735 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-socket-dir-parent\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 
06:50:43.007754 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-cni-multus\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007757 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-slash\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007772 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-cni-bin\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007822 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-cni-multus\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007840 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-ovn\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007854 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007869 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa1c051a-31cd-4dd3-9be8-6194822c2273-proxy-tls\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007804 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-cni-bin\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007896 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-system-cni-dir\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007918 5028 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-etc-kubernetes\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007921 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-ovn\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007960 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-etc-kubernetes\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007971 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-node-log\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007971 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-system-cni-dir\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.007994 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-ovn-kubernetes\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008003 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-socket-dir-parent\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008044 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-node-log\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008027 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-env-overrides\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rmx9\" (UniqueName: \"kubernetes.io/projected/5609ffb8-6ac2-4716-8c08-c466b3dd987b-kube-api-access-7rmx9\") pod 
\"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008023 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-ovn-kubernetes\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008104 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-bin\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008150 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-os-release\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008154 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-bin\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008175 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovn-node-metrics-cert\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008193 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-system-cni-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008211 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-systemd\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008269 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aa1c051a-31cd-4dd3-9be8-6194822c2273-rootfs\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008286 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-multus-certs\") pod \"multus-m2sl7\" (UID: 
\"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008302 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-netns\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008318 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-system-cni-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008336 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdsqn\" (UniqueName: \"kubernetes.io/projected/68dc0fb8-309c-46ef-a4f8-f0eff3169061-kube-api-access-xdsqn\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008349 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/aa1c051a-31cd-4dd3-9be8-6194822c2273-rootfs\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008369 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-cnibin\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008379 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-systemd\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008399 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-log-socket\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008409 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-netns\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008417 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-cnibin\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008432 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-script-lib\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008468 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008450 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-multus-certs\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008516 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-env-overrides\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008481 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-log-socket\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008488 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008558 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-os-release\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008576 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e634c65f-8585-4d5d-b929-b9e1255f8921-cni-binary-copy\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-k8s-cni-cncf-io\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008612 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whssm\" (UniqueName: \"kubernetes.io/projected/e634c65f-8585-4d5d-b929-b9e1255f8921-kube-api-access-whssm\") pod 
\"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008613 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-os-release\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008635 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-hostroot\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008653 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-daemon-config\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008669 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-kubelet\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008672 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-run-k8s-cni-cncf-io\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008685 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-systemd-units\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008706 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-systemd-units\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008713 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-kubelet\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008771 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-kubelet\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008705 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-hostroot\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008797 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-conf-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008822 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-host-var-lib-kubelet\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008864 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-conf-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008938 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-etc-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008985 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.008988 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009005 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa1c051a-31cd-4dd3-9be8-6194822c2273-mcd-auth-proxy-config\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009023 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-etc-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009030 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cnibin\") pod 
\"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009037 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009051 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-netns\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009066 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cnibin\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009073 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-netd\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009088 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-netns\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009094 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-config\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009111 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-netd\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009118 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvg8v\" (UniqueName: \"kubernetes.io/projected/aa1c051a-31cd-4dd3-9be8-6194822c2273-kube-api-access-hvg8v\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009141 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-var-lib-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: 
\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009162 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cni-binary-copy\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009181 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-cni-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009222 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-var-lib-openvswitch\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009094 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-script-lib\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009358 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-cni-dir\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009388 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e634c65f-8585-4d5d-b929-b9e1255f8921-multus-daemon-config\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009428 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009570 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aa1c051a-31cd-4dd3-9be8-6194822c2273-mcd-auth-proxy-config\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009641 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e634c65f-8585-4d5d-b929-b9e1255f8921-cni-binary-copy\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7" Nov 23 
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009687 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-config\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009741 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5609ffb8-6ac2-4716-8c08-c466b3dd987b-os-release\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.009747 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5609ffb8-6ac2-4716-8c08-c466b3dd987b-cni-binary-copy\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.013075 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovn-node-metrics-cert\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.013120 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa1c051a-31cd-4dd3-9be8-6194822c2273-proxy-tls\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.015127 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.024574 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rmx9\" (UniqueName: \"kubernetes.io/projected/5609ffb8-6ac2-4716-8c08-c466b3dd987b-kube-api-access-7rmx9\") pod \"multus-additional-cni-plugins-7l9fm\" (UID: \"5609ffb8-6ac2-4716-8c08-c466b3dd987b\") " pod="openshift-multus/multus-additional-cni-plugins-7l9fm"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.025175 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.026605 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdsqn\" (UniqueName: \"kubernetes.io/projected/68dc0fb8-309c-46ef-a4f8-f0eff3169061-kube-api-access-xdsqn\") pod \"ovnkube-node-xbtxp\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.029940 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whssm\" (UniqueName: \"kubernetes.io/projected/e634c65f-8585-4d5d-b929-b9e1255f8921-kube-api-access-whssm\") pod \"multus-m2sl7\" (UID: \"e634c65f-8585-4d5d-b929-b9e1255f8921\") " pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.031334 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvg8v\" (UniqueName: \"kubernetes.io/projected/aa1c051a-31cd-4dd3-9be8-6194822c2273-kube-api-access-hvg8v\") pod \"machine-config-daemon-th92p\" (UID: \"aa1c051a-31cd-4dd3-9be8-6194822c2273\") " pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.035430 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.044691 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.044957 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.045071 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.045171 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.045255 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.045924 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.052247 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.052280 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:50:43 crc kubenswrapper[5028]: E1123 06:50:43.052426 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:50:43 crc kubenswrapper[5028]: E1123 06:50:43.052523 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.057941 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.073802 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.076446 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-m2sl7"
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.083431 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:43 crc kubenswrapper[5028]: W1123 06:50:43.087133 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode634c65f_8585_4d5d_b929_b9e1255f8921.slice/crio-b40a124884cee890fd0fa2cfee97c65b7e55608a7a94e2d3b3c325da1113419d WatchSource:0}: Error finding container b40a124884cee890fd0fa2cfee97c65b7e55608a7a94e2d3b3c325da1113419d: Status 404 returned error can't find the container with id b40a124884cee890fd0fa2cfee97c65b7e55608a7a94e2d3b3c325da1113419d
Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.087222 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.089450 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.095048 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:50:43 crc kubenswrapper[5028]: W1123 06:50:43.095264 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68dc0fb8_309c_46ef_a4f8_f0eff3169061.slice/crio-a13fd0e49be3314d313ae5f386826636968ec7b14dd39fd80ce239279a41dda3 WatchSource:0}: Error finding container a13fd0e49be3314d313ae5f386826636968ec7b14dd39fd80ce239279a41dda3: Status 404 returned error can't find the container with id a13fd0e49be3314d313ae5f386826636968ec7b14dd39fd80ce239279a41dda3 Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.101471 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: W1123 06:50:43.110157 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5609ffb8_6ac2_4716_8c08_c466b3dd987b.slice/crio-b4bde23c18b2cafba037bcbf4efa781eab4be8e563e455630b6735653b96b5a4 WatchSource:0}: Error finding container b4bde23c18b2cafba037bcbf4efa781eab4be8e563e455630b6735653b96b5a4: Status 404 returned error can't find the container with id b4bde23c18b2cafba037bcbf4efa781eab4be8e563e455630b6735653b96b5a4 Nov 23 06:50:43 crc kubenswrapper[5028]: W1123 06:50:43.111909 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa1c051a_31cd_4dd3_9be8_6194822c2273.slice/crio-32984407bab340c4766364f0b4e9951961b23b2be3d7f3dd75ddaf380e5dc18a 
WatchSource:0}: Error finding container 32984407bab340c4766364f0b4e9951961b23b2be3d7f3dd75ddaf380e5dc18a: Status 404 returned error can't find the container with id 32984407bab340c4766364f0b4e9951961b23b2be3d7f3dd75ddaf380e5dc18a Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.112628 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.123366 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.149471 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.149649 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.149663 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.149871 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.149886 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.197282 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"32984407bab340c4766364f0b4e9951961b23b2be3d7f3dd75ddaf380e5dc18a"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.198979 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerStarted","Data":"b4bde23c18b2cafba037bcbf4efa781eab4be8e563e455630b6735653b96b5a4"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.200017 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"a13fd0e49be3314d313ae5f386826636968ec7b14dd39fd80ce239279a41dda3"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.201204 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerStarted","Data":"b40a124884cee890fd0fa2cfee97c65b7e55608a7a94e2d3b3c325da1113419d"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.203028 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-678pf" event={"ID":"34c0e27d-8812-4054-83c4-eca66db0655e","Type":"ContainerStarted","Data":"d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.203088 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-678pf" event={"ID":"34c0e27d-8812-4054-83c4-eca66db0655e","Type":"ContainerStarted","Data":"2a77d630800681313a775f908ded93d2f2c544882f56a2bbc258da5fbbcf4a0c"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.226046 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.238334 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.252217 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.252382 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.252421 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.252429 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.252445 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.252454 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.264320 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.277249 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.309357 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.332941 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.347661 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.354810 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.354846 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.354855 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.354869 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.354879 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.360076 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.372021 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 
06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.383672 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.393468 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.407071 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.424219 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.456892 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.456926 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.456935 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.456964 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.456973 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.559600 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.559639 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.559648 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.559663 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.559673 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.662421 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.662459 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.662469 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.662486 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.662495 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.765744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.765782 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.765792 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.765810 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.765822 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.868914 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.869014 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.869028 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.869046 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.869348 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.972206 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.972268 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.972278 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.972294 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:43 crc kubenswrapper[5028]: I1123 06:50:43.972304 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:43Z","lastTransitionTime":"2025-11-23T06:50:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.052406 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.052581 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.074621 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.074665 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.074677 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.074694 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.074707 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.177588 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.177629 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.177639 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.177656 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.177665 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.209344 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648" exitCode=0 Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.209435 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.211120 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerStarted","Data":"34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.213149 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.213196 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.215159 5028 generic.go:334] "Generic (PLEG): container finished" podID="5609ffb8-6ac2-4716-8c08-c466b3dd987b" containerID="e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea" exitCode=0 Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.215199 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerDied","Data":"e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.224833 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.237206 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-w2dj6"] Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.238186 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.240033 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.240332 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.241290 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.242097 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.242133 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.250153 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.263342 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.282804 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.287699 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.287735 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.287746 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: 
I1123 06:50:44.287772 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.287784 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.301695 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.318440 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.329722 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ab6d996-d9c7-42c7-8d70-00f3575144b2-serviceca\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.329764 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ab6d996-d9c7-42c7-8d70-00f3575144b2-host\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.329817 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbtrk\" (UniqueName: \"kubernetes.io/projected/9ab6d996-d9c7-42c7-8d70-00f3575144b2-kube-api-access-qbtrk\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.340012 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.366640 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.381603 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.392546 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.392623 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.392634 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.392650 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.392675 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.396333 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.409666 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.423092 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.430538 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ab6d996-d9c7-42c7-8d70-00f3575144b2-host\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.430573 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ab6d996-d9c7-42c7-8d70-00f3575144b2-serviceca\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.430597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbtrk\" (UniqueName: \"kubernetes.io/projected/9ab6d996-d9c7-42c7-8d70-00f3575144b2-kube-api-access-qbtrk\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.430664 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ab6d996-d9c7-42c7-8d70-00f3575144b2-host\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.431586 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9ab6d996-d9c7-42c7-8d70-00f3575144b2-serviceca\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.437368 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.447376 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbtrk\" (UniqueName: \"kubernetes.io/projected/9ab6d996-d9c7-42c7-8d70-00f3575144b2-kube-api-access-qbtrk\") pod \"node-ca-w2dj6\" (UID: \"9ab6d996-d9c7-42c7-8d70-00f3575144b2\") " pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.452437 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.477534 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.490253 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.494899 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.494927 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.494936 5028 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.494973 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.494986 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.503430 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.516217 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.528848 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.542197 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.556779 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.569652 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.580293 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-w2dj6" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.581865 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.595990 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.597089 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.597125 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.597135 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.597148 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.597156 5028 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.613916 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.631746 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.632226 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.632370 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:50:52.632343303 +0000 UTC m=+36.329748082 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.644836 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.666091 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.680493 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.699042 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.699071 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.699081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.699094 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.699103 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.733739 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.733789 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.733819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.733848 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734028 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734099 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:52.734080503 +0000 UTC m=+36.431485282 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734114 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734133 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734172 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734217 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734231 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734196 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:52.734175695 +0000 UTC m=+36.431580474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734187 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734302 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:52.734282808 +0000 UTC m=+36.431687587 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734322 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:44 crc kubenswrapper[5028]: E1123 06:50:44.734414 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:50:52.73438653 +0000 UTC m=+36.431791319 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.802143 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.802208 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.802225 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.802254 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.802271 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.905403 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.905467 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.905486 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.905509 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:44 crc kubenswrapper[5028]: I1123 06:50:44.905526 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:44Z","lastTransitionTime":"2025-11-23T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.008767 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.009198 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.009209 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.009225 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.009235 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.052382 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.052395 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:45 crc kubenswrapper[5028]: E1123 06:50:45.052606 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:45 crc kubenswrapper[5028]: E1123 06:50:45.052677 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.111259 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.111294 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.111302 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.111315 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.111324 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.213271 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.213306 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.213314 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.213328 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.213341 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.219681 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w2dj6" event={"ID":"9ab6d996-d9c7-42c7-8d70-00f3575144b2","Type":"ContainerStarted","Data":"63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.219736 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-w2dj6" event={"ID":"9ab6d996-d9c7-42c7-8d70-00f3575144b2","Type":"ContainerStarted","Data":"b9f47873351ede0460dc2fb4f18c5287e12dd3ec2a7e3daa2bb241e16226d82e"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.222837 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.222867 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.222879 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.222890 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.224962 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerStarted","Data":"32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.239358 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.250158 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.272062 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.284028 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.299450 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.314785 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.315242 5028 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.315290 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.315308 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.315335 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.315351 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.326001 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.337368 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.352729 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.366801 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.386568 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.400317 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.413720 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.417543 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.417689 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.417822 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.418123 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.418353 5028 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.425342 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.435592 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.447493 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.458223 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.469862 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.480796 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.492328 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.505617 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.517350 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.520632 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.520663 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.520673 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.520696 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.520706 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.535689 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.554796 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.579066 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.600129 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.619558 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.623078 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.623119 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.623132 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.623148 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.623157 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.632039 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.645025 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.655700 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.725934 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.726002 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.726011 5028 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.726030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.726042 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.829617 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.829671 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.829683 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.829701 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.829711 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.931762 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.931814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.931829 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.931847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:45 crc kubenswrapper[5028]: I1123 06:50:45.931860 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:45Z","lastTransitionTime":"2025-11-23T06:50:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.034528 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.034569 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.034580 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.034599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.034609 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.052316 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:46 crc kubenswrapper[5028]: E1123 06:50:46.052447 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.137452 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.137492 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.137501 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.137517 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.137527 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.229390 5028 generic.go:334] "Generic (PLEG): container finished" podID="5609ffb8-6ac2-4716-8c08-c466b3dd987b" containerID="32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db" exitCode=0 Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.229485 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerDied","Data":"32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.233280 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.233345 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.242865 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.242905 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.242916 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.242940 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.243100 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.246276 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.260525 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.277513 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.305152 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.323595 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.336070 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d746
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.348011 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.348057 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.348066 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.348084 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.348096 5028 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.348619 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.361722 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.371680 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.391118 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.402705 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.415826 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.427568 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.438709 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.450217 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:46Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.450553 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.450581 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.450591 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.450606 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.450615 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.552835 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.552918 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.552929 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.552959 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.552972 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.655537 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.655575 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.655584 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.655601 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.655612 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.757293 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.757337 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.757348 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.757365 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.757377 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.803169 5028 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.859609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.859645 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.859655 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.859672 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.859684 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.963137 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.963180 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.963189 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.963209 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:46 crc kubenswrapper[5028]: I1123 06:50:46.963219 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:46Z","lastTransitionTime":"2025-11-23T06:50:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.052347 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.052363 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:47 crc kubenswrapper[5028]: E1123 06:50:47.052504 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:47 crc kubenswrapper[5028]: E1123 06:50:47.052636 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.065518 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.065557 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.065566 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.065583 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.065593 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.073077 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.085545 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.099768 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.110800 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.123894 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.137833 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.160447 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.167945 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.167997 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.168010 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.168026 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.168036 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.174013 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.187324 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.201814 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.216341 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.230096 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.239309 5028 generic.go:334] "Generic (PLEG): container finished" podID="5609ffb8-6ac2-4716-8c08-c466b3dd987b" containerID="763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13" exitCode=0 Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.239373 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerDied","Data":"763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13"} Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.242995 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.275451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.275517 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.275534 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.275923 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.275982 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.303250 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.337775 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.357747 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.370466 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.378726 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.378766 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.378778 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.378797 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.378811 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.384355 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.402613 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.418578 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.433882 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.447590 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.461888 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.474078 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.481441 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.481493 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.481515 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.481541 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.481560 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.496451 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.509905 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.523270 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.536867 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.550643 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.565772 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:47Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.583812 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.583874 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.583890 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.583909 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.583921 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.686227 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.686262 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.686271 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.686286 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.686295 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.789072 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.789141 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.789167 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.789196 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.789219 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.891574 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.891614 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.891623 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.891637 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.891647 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.994498 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.994554 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.994571 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.994596 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:47 crc kubenswrapper[5028]: I1123 06:50:47.994615 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:47Z","lastTransitionTime":"2025-11-23T06:50:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.052172 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:50:48 crc kubenswrapper[5028]: E1123 06:50:48.052350 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.097035 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.097083 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.097096 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.097116 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.097128 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.200045 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.200088 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.200108 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.200133 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.200147 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.246038 5028 generic.go:334] "Generic (PLEG): container finished" podID="5609ffb8-6ac2-4716-8c08-c466b3dd987b" containerID="368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0" exitCode=0
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.246119 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerDied","Data":"368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0"}
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.251503 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"}
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.270456 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.286392 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.305808 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.308002 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.308072 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.308092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.308123 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.308145 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.328977 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z"
Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.348693 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.360067 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.374632 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.395345 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z 
is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.410879 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.410918 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.410927 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.410942 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.410964 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.412191 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.427583 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-k
ube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.439624 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.453715 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.465263 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.486323 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.500194 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:48Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.513244 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.513293 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.513304 5028 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.513321 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.513333 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.615933 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.615991 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.616001 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.616019 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.616030 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.718068 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.718116 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.718130 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.718148 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.718162 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.820816 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.820886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.820910 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.820935 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.820990 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.924159 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.924196 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.924205 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.924218 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:48 crc kubenswrapper[5028]: I1123 06:50:48.924228 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:48Z","lastTransitionTime":"2025-11-23T06:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.027007 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.027064 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.027082 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.027105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.027122 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.053007 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.053113 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:49 crc kubenswrapper[5028]: E1123 06:50:49.053189 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:49 crc kubenswrapper[5028]: E1123 06:50:49.053291 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.130165 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.130211 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.130224 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.130243 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.130256 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.233449 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.233505 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.233523 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.233547 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.233564 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.336281 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.336325 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.336334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.336350 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.336359 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.439240 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.439534 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.439549 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.439566 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.439576 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.545991 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.546044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.546056 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.546075 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.546086 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.648887 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.648930 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.648939 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.648967 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.648978 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.750932 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.750988 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.751015 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.751059 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.751070 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.853808 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.853847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.853856 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.853871 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.853881 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.956553 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.956589 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.956608 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.956623 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:49 crc kubenswrapper[5028]: I1123 06:50:49.956632 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:49Z","lastTransitionTime":"2025-11-23T06:50:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.052559 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:50 crc kubenswrapper[5028]: E1123 06:50:50.052691 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.059397 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.059435 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.059448 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.059466 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.059480 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.162774 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.162844 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.162855 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.162871 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.162880 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.264371 5028 generic.go:334] "Generic (PLEG): container finished" podID="5609ffb8-6ac2-4716-8c08-c466b3dd987b" containerID="a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d" exitCode=0 Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.264511 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerDied","Data":"a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.264756 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.264818 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.264838 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.264861 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.264879 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.270240 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.270526 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.298156 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.304521 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.322843 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.340779 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.360259 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.368722 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.368769 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.368781 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.368801 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.368815 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.372067 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc 
kubenswrapper[5028]: I1123 06:50:50.386837 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.398861 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.434451 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.453006 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.469246 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.471569 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.471609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.471618 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.471637 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.471648 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.485589 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.505096 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.520089 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.538136 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.553318 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.569431 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.574748 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.574784 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.574795 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.574809 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.574820 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.582436 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.596588 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.613139 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.627911 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.644664 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee
5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.658902 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.668638 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.677013 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.677065 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.677079 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.677101 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.677115 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.684181 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.703107 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60f
e67de38c3a9c16f2a60daf4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.717358 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.741547 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.753286 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.764978 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.774028 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:50Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.779713 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.779759 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.779768 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.779784 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.779793 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.882334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.882375 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.882385 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.882436 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.882448 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.984554 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.984589 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.984599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.984615 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:50 crc kubenswrapper[5028]: I1123 06:50:50.984624 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:50Z","lastTransitionTime":"2025-11-23T06:50:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.052405 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:51 crc kubenswrapper[5028]: E1123 06:50:51.052589 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.052700 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:51 crc kubenswrapper[5028]: E1123 06:50:51.053011 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.087882 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.087990 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.088011 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.088036 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.088054 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.191503 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.191576 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.191599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.191629 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.191674 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.277755 5028 generic.go:334] "Generic (PLEG): container finished" podID="5609ffb8-6ac2-4716-8c08-c466b3dd987b" containerID="a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9" exitCode=0 Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.277813 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerDied","Data":"a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.278016 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.278669 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.295370 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.295442 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.295457 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.295484 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.295501 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.306404 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.314044 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.328813 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.356050 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.370730 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.390655 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.398059 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.398105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.398146 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.398167 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.398180 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.404805 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.422762 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.439808 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.454536 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.473261 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.495778 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.501147 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.501177 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.501187 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.501201 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.501211 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.511605 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.526009 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.544484 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.568870 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.584473 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.604827 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.604884 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.604898 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.604922 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.604934 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.607256 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60f
e67de38c3a9c16f2a60daf4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.628335 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.640876 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.651246 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.660394 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.674046 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.683268 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.701901 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.707515 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.707552 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.707560 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.707575 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.707585 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.715648 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.726528 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.737243 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.748138 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.757699 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.767100 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:51Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.809565 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.809601 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.809611 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.809627 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.809639 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.912365 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.912398 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.912408 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.912423 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:51 crc kubenswrapper[5028]: I1123 06:50:51.912432 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:51Z","lastTransitionTime":"2025-11-23T06:50:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.015333 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.015368 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.015377 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.015390 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.015400 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.056169 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.056348 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.118465 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.118514 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.118524 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.118537 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.118547 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.220481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.220522 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.220532 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.220558 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.220567 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.284164 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.285089 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" event={"ID":"5609ffb8-6ac2-4716-8c08-c466b3dd987b","Type":"ContainerStarted","Data":"9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.296691 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.308507 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.321596 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.322893 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.322922 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.322932 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.322981 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.323009 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.338797 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.351620 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.368759 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ff
ed35d60fe67de38c3a9c16f2a60daf4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.384707 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.397844 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.409991 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.418657 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.425516 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.425551 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.425582 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.425597 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.425607 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.431248 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.441204 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.463400 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.479047 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.494097 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.527577 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.527610 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.527620 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.527633 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.527644 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.584151 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.584431 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.584450 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.584473 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.584487 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.595887 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.599185 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.599225 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.599238 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.599255 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.599268 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.611066 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.616362 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.616407 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.616417 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.616437 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.616448 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.629088 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.633709 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.633754 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.633764 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.633782 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.633795 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.648620 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.652479 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.652507 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.652517 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.652531 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.652543 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.663092 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:52Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.663249 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.664861 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.664901 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.664914 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.664932 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.664942 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.720496 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.720657 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:51:08.720631436 +0000 UTC m=+52.418036215 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.767107 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.767146 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.767158 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.767173 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.767183 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.821755 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.821797 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.821820 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.821846 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.821918 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.821983 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822026 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822039 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822003 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822080 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822090 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822010 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:08.821990896 +0000 UTC m=+52.519395675 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822002 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822138 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:08.822113639 +0000 UTC m=+52.519518418 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822165 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:08.82215201 +0000 UTC m=+52.519556789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:50:52 crc kubenswrapper[5028]: E1123 06:50:52.822187 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:08.822181841 +0000 UTC m=+52.519586620 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.869548 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.869586 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.869596 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.869610 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.869622 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.971746 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.971776 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.971789 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.971803 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:52 crc kubenswrapper[5028]: I1123 06:50:52.971814 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:52Z","lastTransitionTime":"2025-11-23T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.052512 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.052613 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:53 crc kubenswrapper[5028]: E1123 06:50:53.052737 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:53 crc kubenswrapper[5028]: E1123 06:50:53.053016 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.073429 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.073474 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.073487 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.073504 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.073514 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.176011 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.176089 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.176109 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.176144 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.176165 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.279454 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.279503 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.279516 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.279536 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.279547 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.288661 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/0.log" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.291269 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f" exitCode=1 Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.291366 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.292782 5028 scope.go:117] "RemoveContainer" containerID="b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.323356 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.338553 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.352136 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.362904 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.381459 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.381612 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.381701 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.381772 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.381838 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.383340 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.401818 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.418644 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.432067 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.444874 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.462752 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.475773 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.484054 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.484087 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.484098 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.484114 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.484126 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.488296 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.501989 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.535227 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a
127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 06:50:53.167798 6291 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:50:53.167855 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:50:53.167886 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:50:53.167920 6291 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:50:53.168055 6291 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:50:53.168074 6291 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:50:53.168088 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 06:50:53.168102 6291 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 06:50:53.168106 6291 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:50:53.168114 6291 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 06:50:53.168128 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 06:50:53.168134 6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:50:53.168229 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 06:50:53.171725 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 06:50:53.171784 6291 factory.go:656] 
Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.565341 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:53Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.585859 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.585888 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.585897 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.585909 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.585917 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.688320 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.688376 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.688389 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.688411 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.688426 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.790722 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.790803 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.790825 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.790856 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.790881 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.893396 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.893477 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.893506 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.893536 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.893562 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.996750 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.996813 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.996829 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.996853 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:53 crc kubenswrapper[5028]: I1123 06:50:53.996868 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:53Z","lastTransitionTime":"2025-11-23T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.052925 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:54 crc kubenswrapper[5028]: E1123 06:50:54.053213 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.099446 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.099507 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.099520 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.099541 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.099554 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.202345 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.202388 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.202399 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.202419 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.202431 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.295607 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/0.log" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.298268 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.298386 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.304284 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.304318 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.304327 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.304343 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.304353 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.309350 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.335611 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.349534 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.362118 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.372558 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.384834 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.395637 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.407255 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.407310 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.407322 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.407341 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.407354 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.414292 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.427034 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.447258 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a867
14f50eb69a54f00c525b323e66a4287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 06:50:53.167798 6291 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:50:53.167855 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:50:53.167886 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:50:53.167920 6291 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:50:53.168055 6291 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:50:53.168074 6291 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:50:53.168088 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 06:50:53.168102 6291 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 06:50:53.168106 6291 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:50:53.168114 6291 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 06:50:53.168128 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 06:50:53.168134 6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:50:53.168229 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 06:50:53.171725 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 06:50:53.171784 6291 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.462009 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.474842 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.487727 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.499255 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.510762 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.510804 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.510819 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.510838 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.510851 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.514194 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.613609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.613648 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.613661 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.613675 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.613684 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.716590 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.716629 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.716638 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.716654 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.716665 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.819799 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.819849 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.819860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.819878 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.819893 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.922421 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.922454 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.922474 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.922492 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:54 crc kubenswrapper[5028]: I1123 06:50:54.922502 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:54Z","lastTransitionTime":"2025-11-23T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.024701 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.024733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.024742 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.024755 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.024763 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.052246 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.052269 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:55 crc kubenswrapper[5028]: E1123 06:50:55.052375 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:55 crc kubenswrapper[5028]: E1123 06:50:55.052501 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.127258 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.127307 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.127319 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.127340 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.127353 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.229813 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.229860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.229889 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.229904 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.229914 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.302433 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/1.log" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.303456 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/0.log" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.306344 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e" exitCode=1 Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.306383 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.306416 5028 scope.go:117] "RemoveContainer" containerID="b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.307053 5028 scope.go:117] "RemoveContainer" containerID="3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e" Nov 23 06:50:55 crc kubenswrapper[5028]: E1123 06:50:55.307287 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.320677 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.332096 5028 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.332138 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.332151 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.332169 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.332185 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.335202 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.350987 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.365304 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.377548 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.397502 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb6
9a54f00c525b323e66a4287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 06:50:53.167798 6291 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:50:53.167855 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:50:53.167886 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:50:53.167920 6291 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:50:53.168055 6291 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:50:53.168074 6291 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:50:53.168088 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 06:50:53.168102 6291 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 06:50:53.168106 6291 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:50:53.168114 6291 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 06:50:53.168128 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 06:50:53.168134 6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:50:53.168229 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 06:50:53.171725 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 06:50:53.171784 6291 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] 
Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.412747 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2"] Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.413687 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.416925 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.417496 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.427680 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"n
ame\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.435935 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.436005 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.436018 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.436075 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.436161 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.447344 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2326ae5b-2300-40a4-ae87-be0b1b781af6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.447384 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtk6\" (UniqueName: \"kubernetes.io/projected/2326ae5b-2300-40a4-ae87-be0b1b781af6-kube-api-access-lmtk6\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.447412 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2326ae5b-2300-40a4-ae87-be0b1b781af6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.447438 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2326ae5b-2300-40a4-ae87-be0b1b781af6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.447519 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.459083 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.470663 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.484114 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.496646 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.517070 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.529254 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.539097 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.539160 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.539179 5028 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.539201 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.539218 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.543640 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.548989 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2326ae5b-2300-40a4-ae87-be0b1b781af6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: 
I1123 06:50:55.549035 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtk6\" (UniqueName: \"kubernetes.io/projected/2326ae5b-2300-40a4-ae87-be0b1b781af6-kube-api-access-lmtk6\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.549075 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2326ae5b-2300-40a4-ae87-be0b1b781af6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.549767 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2326ae5b-2300-40a4-ae87-be0b1b781af6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.549817 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2326ae5b-2300-40a4-ae87-be0b1b781af6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.549881 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2326ae5b-2300-40a4-ae87-be0b1b781af6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.555795 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2326ae5b-2300-40a4-ae87-be0b1b781af6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.559050 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.568764 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtk6\" (UniqueName: \"kubernetes.io/projected/2326ae5b-2300-40a4-ae87-be0b1b781af6-kube-api-access-lmtk6\") pod \"ovnkube-control-plane-749d76644c-5dbq2\" (UID: \"2326ae5b-2300-40a4-ae87-be0b1b781af6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.574222 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.585237 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.598032 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.619484 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a
127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7e74a220c45a94791c3a764ea2744ffed35d60fe67de38c3a9c16f2a60daf4f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:53Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI1123 06:50:53.167798 6291 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1123 06:50:53.167855 6291 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1123 06:50:53.167886 6291 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1123 06:50:53.167920 6291 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1123 06:50:53.168055 6291 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1123 06:50:53.168074 6291 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1123 06:50:53.168088 6291 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1123 06:50:53.168102 6291 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1123 06:50:53.168106 6291 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1123 06:50:53.168114 6291 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1123 06:50:53.168128 6291 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1123 06:50:53.168134 6291 handler.go:208] Removed *v1.Node event handler 2\\\\nI1123 06:50:53.168229 6291 handler.go:208] Removed *v1.Node event handler 7\\\\nI1123 06:50:53.171725 6291 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1123 06:50:53.171784 6291 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 
06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.633781 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.641876 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.641910 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.641920 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.641935 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.641959 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.645038 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.663925 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\
"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.676734 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.689207 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.697892 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.709162 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.719668 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.730201 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.738238 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.741706 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.744375 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.744488 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.744603 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.744675 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.745350 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.754737 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:55Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.852704 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.852861 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.852987 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.853024 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.853035 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.955499 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.955818 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.955829 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.955842 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:55 crc kubenswrapper[5028]: I1123 06:50:55.955851 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:55Z","lastTransitionTime":"2025-11-23T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.052672 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:50:56 crc kubenswrapper[5028]: E1123 06:50:56.052858 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.058929 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.059018 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.059031 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.059075 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.059089 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.160987 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.161031 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.161042 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.161058 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.161075 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.263523 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.263600 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.263614 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.263632 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.263644 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.311689 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/1.log" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.315703 5028 scope.go:117] "RemoveContainer" containerID="3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e" Nov 23 06:50:56 crc kubenswrapper[5028]: E1123 06:50:56.315871 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.317326 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" event={"ID":"2326ae5b-2300-40a4-ae87-be0b1b781af6","Type":"ContainerStarted","Data":"75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.317370 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" event={"ID":"2326ae5b-2300-40a4-ae87-be0b1b781af6","Type":"ContainerStarted","Data":"efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.317388 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" event={"ID":"2326ae5b-2300-40a4-ae87-be0b1b781af6","Type":"ContainerStarted","Data":"55301bf89773d5f73b0b0d652287c76af4b72dc39dd1fcd7447a00a88063e9f1"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.340604 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.354318 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.366451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.366480 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.366491 5028 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.366507 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.366519 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.369267 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.379002 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.389404 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.402051 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.414440 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.424497 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.436006 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.450634 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.461340 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.468890 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.469042 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.469122 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.469227 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.469308 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.471727 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.482733 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.501371 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a
127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 
06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.516245 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.517522 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5ft9z"] Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.518082 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:56 crc kubenswrapper[5028]: E1123 06:50:56.518207 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.527050 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.536557 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 
06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.548379 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.559054 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wrpx\" (UniqueName: \"kubernetes.io/projected/bfed01d0-dd8f-478d-991f-4a9242b1c2be-kube-api-access-7wrpx\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.559116 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.560581 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.571596 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.571648 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.571661 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.571677 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.571686 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.583131 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.595767 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.608303 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.619747 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.632800 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.648941 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.660553 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.660616 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wrpx\" (UniqueName: \"kubernetes.io/projected/bfed01d0-dd8f-478d-991f-4a9242b1c2be-kube-api-access-7wrpx\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:56 crc kubenswrapper[5028]: E1123 06:50:56.660726 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:50:56 crc kubenswrapper[5028]: E1123 06:50:56.660794 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:50:57.160777292 +0000 UTC m=+40.858182091 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.662775 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.673513 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.673537 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.673547 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.673560 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.673571 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.675382 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.675983 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wrpx\" (UniqueName: \"kubernetes.io/projected/bfed01d0-dd8f-478d-991f-4a9242b1c2be-kube-api-access-7wrpx\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.695147 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb6
9a54f00c525b323e66a4287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.709311 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.717653 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.729012 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.740333 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.751276 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:56Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.776356 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.776380 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.776387 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.776400 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.776408 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.878994 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.879026 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.879036 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.879050 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.879059 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.982125 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.982150 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.982158 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.982169 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:56 crc kubenswrapper[5028]: I1123 06:50:56.982178 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:56Z","lastTransitionTime":"2025-11-23T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.052340 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.052374 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:57 crc kubenswrapper[5028]: E1123 06:50:57.052516 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:57 crc kubenswrapper[5028]: E1123 06:50:57.053147 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.067082 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.077789 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.084682 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.084720 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.084733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.084751 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.084763 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.097866 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.114658 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.126792 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.141222 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.161441 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: E1123 06:50:57.164854 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:50:57 crc kubenswrapper[5028]: E1123 06:50:57.164931 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:50:58.164898588 +0000 UTC m=+41.862303367 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.165172 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.176621 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.187828 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.187863 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.187872 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.187886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.187897 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.190968 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.202114 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.223000 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a
127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 
06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.238332 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.249774 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.265728 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.279632 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.289408 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.290659 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.290698 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.290711 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.290729 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.290740 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.298750 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:57Z is after 2025-08-24T17:21:41Z"
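The "Failed to update status for pod" entries above all die the same way: the kubelet's status manager builds a JSON patch for the pod's status, and the API server rejects it because the pod admission webhook sits behind an expired certificate. The patch itself is readable once the two rounds of string quoting applied on its way into the journal are undone. A minimal sketch in Python, assuming one of the patch-failure entries above has been saved verbatim to kubelet.snippet (a hypothetical file name):

    import json
    import re

    # One "Failed to update status for pod" entry, pasted verbatim from the journal.
    entry = open("kubelet.snippet").read()

    def unquote_once(s: str) -> str:
        # Undo one level of Go-style quoting: every backslash escapes the
        # character that follows it ('\\' -> '\', '\"' -> '"').
        out, i = [], 0
        while i < len(s):
            if s[i] == "\\" and i + 1 < len(s):
                out.append(s[i + 1])
                i += 2
            else:
                out.append(s[i])
                i += 1
        return "".join(out)

    # The patch sits between 'failed to patch status \"' and '\" for pod'.
    m = re.search(r'failed to patch status \\"(.*?)\\" for pod', entry, re.S)
    patch = json.loads(unquote_once(unquote_once(m.group(1))))
    for cond in patch["status"]["conditions"]:
        print(cond["type"], cond.get("status"))

Run against the kube-apiserver-crc entry above, this would print the five pod conditions (PodReadyToStartContainers, Initialized, Ready, ContainersReady, PodScheduled), all True, which is the point: the pods are healthy and only the status write-back is failing.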
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.392766 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.392808 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.392818 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.392840 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.392851 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.417102 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.418126 5028 scope.go:117] "RemoveContainer" containerID="3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e"
Nov 23 06:50:57 crc kubenswrapper[5028]: E1123 06:50:57.418314 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.495127 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.495165 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.495176 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.495193 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.495206 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.597423 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.597448 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.597457 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.597469 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.597478 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.699444 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.699471 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.699481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.699496 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.699508 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.801831 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.801875 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.801889 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.801904 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
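The same x509 failure repeats on every status patch above: the network-node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2025-11-23, so nothing the status manager sends can land until the certificate is rotated or the clock discrepancy is resolved. A minimal sketch for inspecting the validity window from the node, assuming Python with the cryptography package (version 42+ for the _utc accessors); the host and port are taken from the log entries:

    import datetime
    import socket
    import ssl

    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint quoted in the log

    # Fetch the peer certificate without verifying it (verification is
    # exactly what is failing), then compare its window to the clock.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", now > cert.not_valid_after_utc)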
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.801917 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.904733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.904764 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.904774 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.904787 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:57 crc kubenswrapper[5028]: I1123 06:50:57.904798 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:57Z","lastTransitionTime":"2025-11-23T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.007170 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.007231 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.007246 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.007268 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.007287 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.052847 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.052911 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:50:58 crc kubenswrapper[5028]: E1123 06:50:58.053044 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:50:58 crc kubenswrapper[5028]: E1123 06:50:58.053231 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.111800 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.112283 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.112418 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.112642 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.112844 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.176710 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:50:58 crc kubenswrapper[5028]: E1123 06:50:58.177238 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:50:58 crc kubenswrapper[5028]: E1123 06:50:58.177409 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:51:00.177387196 +0000 UTC m=+43.874791995 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered
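Note the retry scheduling in the mount failure above: the first attempt is rescheduled with durationBeforeRetry 2s, and when the same mount fails again at 06:51:00 later in this log the delay becomes 4s. That is a doubling backoff, the same shape as the "back-off 10s restarting failed container" CrashLoopBackOff message for ovnkube-controller earlier. A toy sketch of the policy (the constants here are illustrative, not the kubelet's exact values):

    def retry_delay(failures: int, base: float = 2.0, cap: float = 300.0) -> float:
        """Doubling backoff: 2s, 4s, 8s, ... capped so repeated failures
        do not push retries out indefinitely."""
        return min(base * 2 ** (failures - 1), cap)

    for n in range(1, 6):
        print(f"failure {n}: retry in {retry_delay(n):g}s")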
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.215658 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.215707 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.215716 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.215733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.215743 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.317942 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.318007 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.318019 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.318036 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.318046 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.420285 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.420341 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.420353 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.420369 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.420380 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.523280 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.523337 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.523355 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.523373 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.523383 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.625670 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.625720 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.625733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.625747 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
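All of the NodeNotReady churn in this stretch traces back to one condition: there is no CNI config under /etc/kubernetes/cni/net.d/ yet, and the kubelet keeps the Ready condition False until the network plugin (ovn-kubernetes here, whose ovnkube-controller is crash-looping) writes one. A small sketch of the same check, assuming it runs on the node itself; the path is the one quoted in the kubelet message:

    import os
    import time

    CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message

    # Poll until the network plugin drops a config file; the kubelet flips
    # NetworkReady back to true shortly after one appears.
    while True:
        try:
            confs = sorted(f for f in os.listdir(CNI_DIR)
                           if f.endswith((".conf", ".conflist", ".json")))
        except FileNotFoundError:
            confs = []
        if confs:
            print("CNI config present:", confs)
            break
        print("no CNI configuration file yet; checking again in 2s")
        time.sleep(2)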
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.625755 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.727750 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.727799 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.727812 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.727828 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.727840 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.830322 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.830385 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.830400 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.830423 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.830440 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.933116 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.933151 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.933159 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.933173 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:50:58 crc kubenswrapper[5028]: I1123 06:50:58.933181 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:58Z","lastTransitionTime":"2025-11-23T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.036034 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.036081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.036092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.036104 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.036113 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.052279 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:50:59 crc kubenswrapper[5028]: E1123 06:50:59.052360 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.052499 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:50:59 crc kubenswrapper[5028]: E1123 06:50:59.052638 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.138727 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.139123 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.139273 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.139377 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.139479 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.243204 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.243267 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.243286 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.243311 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.243329 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.346018 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.346069 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.346082 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.346104 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.346118 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.449025 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.449060 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.449073 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.449088 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.449098 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.552569 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.552988 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.553094 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.553194 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.553292 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.657316 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.657398 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.657417 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.657758 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.657780 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.761151 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.761194 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.761206 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.761223 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.761239 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.864028 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.864076 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.864091 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.864112 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.864127 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.967589 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.967646 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.967666 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.967691 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:50:59 crc kubenswrapper[5028]: I1123 06:50:59.967709 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:50:59Z","lastTransitionTime":"2025-11-23T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.052777 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.052997 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:00 crc kubenswrapper[5028]: E1123 06:51:00.053078 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:00 crc kubenswrapper[5028]: E1123 06:51:00.053233 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.070534 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.070610 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.070622 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.070638 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.070648 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:00Z","lastTransitionTime":"2025-11-23T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.174452 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.174530 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.174548 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.174572 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.174590 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:00Z","lastTransitionTime":"2025-11-23T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.198313 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:00 crc kubenswrapper[5028]: E1123 06:51:00.198477 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:51:00 crc kubenswrapper[5028]: E1123 06:51:00.198555 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:51:04.19853367 +0000 UTC m=+47.895938449 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered
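The 'object "openshift-multus"/"metrics-daemon-secret" not registered' text is the kubelet's local object cache talking, not the API server: after the restart the secret manager has not (re-)registered this pod's secrets yet, so the mount cannot even request the object. Once the control plane is reachable again, the Secret itself can be checked directly; a sketch assuming the kubernetes Python client and a working admin kubeconfig (neither of which this node has at this point in the log):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # assumes a valid kubeconfig for the cluster
    v1 = client.CoreV1Api()
    try:
        secret = v1.read_namespaced_secret("metrics-daemon-secret", "openshift-multus")
        print("secret exists; keys:", sorted((secret.data or {}).keys()))
    except ApiException as exc:
        print("lookup failed:", exc.status, exc.reason)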
Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.483056 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.483123 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.483134 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.483148 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.483157 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:00Z","lastTransitionTime":"2025-11-23T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.589084 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.589451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.589582 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.589682 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.589767 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:00Z","lastTransitionTime":"2025-11-23T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.695047 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.695555 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.695731 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.695876 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.696057 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:00Z","lastTransitionTime":"2025-11-23T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.799502 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.799576 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.799601 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.799633 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.799658 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:00Z","lastTransitionTime":"2025-11-23T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.901986 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.902342 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.902354 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.902369 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:00 crc kubenswrapper[5028]: I1123 06:51:00.902380 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:00Z","lastTransitionTime":"2025-11-23T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.005412 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.005478 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.005499 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.005524 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.005544 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.053229 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:01 crc kubenswrapper[5028]: E1123 06:51:01.053525 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.054190 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:01 crc kubenswrapper[5028]: E1123 06:51:01.054421 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.109058 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.109117 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.109152 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.109173 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.109190 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.212275 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.212338 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.212351 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.212375 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.212396 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.315418 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.315497 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.315510 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.315530 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.315543 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.418857 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.418927 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.418987 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.419021 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.419041 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.523130 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.523196 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.523212 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.523238 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.523254 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.626894 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.627004 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.627025 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.627053 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.627080 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.730783 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.730854 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.730874 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.730907 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.730926 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.834399 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.834476 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.834499 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.834531 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.834552 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.938599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.938659 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.938673 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.938697 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:01 crc kubenswrapper[5028]: I1123 06:51:01.938715 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:01Z","lastTransitionTime":"2025-11-23T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.042044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.042095 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.042113 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.042138 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.042155 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.052619 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.052761 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.052804 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.053033 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.146107 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.146177 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.146205 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.146244 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.146326 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.250455 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.250538 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.250556 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.250587 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.250626 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.354247 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.354297 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.354311 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.354332 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.354351 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.457847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.457898 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.457910 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.457928 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.457941 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.561383 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.561467 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.561486 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.561517 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.561541 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.664109 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.664162 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.664175 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.664195 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.664210 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.767704 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.767747 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.767759 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.767779 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.767793 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.832287 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.832352 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.832361 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.832378 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.832389 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.853546 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:02Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.859511 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.859583 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.859614 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.859648 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.859670 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.879260 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:02Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.888857 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.889469 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.889604 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.889785 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.889924 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.907860 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:02Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.912792 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.912842 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
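The repeated patch failures above all trace to one root cause: the serving certificate behind the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23. As a quick diagnostic, a minimal Go sketch (hypothetical tooling, not part of the kubelet) can dial the endpoint and print the validity window of whatever certificate it presents:

// certcheck.go: dial the webhook endpoint named in the log and report the
// presented certificate's validity window. InsecureSkipVerify is deliberate:
// verification is exactly what fails here, and the goal is to read NotAfter,
// not to authenticate the peer.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("expired, matching the x509 error in the log")
	}
}

On CRC this is the expected consequence of resuming a VM long after its certificates were minted; the cluster normally rotates them itself once the clock and control plane catch up, and the retries below continue until that happens.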
event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.912860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.912886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.912905 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.931905 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:02Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.937673 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.938105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
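Each "Node became not ready" entry carries the same KubeletNotReady reason: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, so the container runtime network stays NetworkReady=false until the network provider writes one. The readiness test amounts to scanning that directory for a usable config; a rough Go sketch of the idea, assuming the usual libcni convention of recognizing .conf, .conflist, and .json files:

// cnicheck.go: report whether the CNI conf directory named in the log
// contains any config file a CNI-based runtime would pick up.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}
	var found []string
	for _, e := range entries {
		// extensions conventionally accepted by libcni's config loader
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found; NetworkReady stays false")
		return
	}
	fmt.Printf("CNI config present: %v\n", found)
}

Until the network provider (OVN-Kubernetes on CRC) comes up and drops its config there, every pod that needs a new sandbox keeps failing with the NetworkPluginNotReady error seen throughout this log.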
event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.938420 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.938635 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.938776 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.957428 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:02Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:02 crc kubenswrapper[5028]: E1123 06:51:02.958409 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.960592 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.960819 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.961069 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.961261 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:02 crc kubenswrapper[5028]: I1123 06:51:02.961530 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:02Z","lastTransitionTime":"2025-11-23T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.052629 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.052709 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:03 crc kubenswrapper[5028]: E1123 06:51:03.052777 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:03 crc kubenswrapper[5028]: E1123 06:51:03.052841 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.064221 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.064255 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.064265 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.064278 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.064289 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.167489 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.167561 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.167578 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.167615 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.167635 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.271030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.271103 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.271127 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.271159 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.271185 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.374695 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.374735 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.374747 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.374761 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.374773 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.478446 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.478792 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.478885 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.479016 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.479247 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.583675 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.583745 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.583774 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.583806 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.583828 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.687873 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.688003 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.688027 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.688059 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.688085 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.790891 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.791270 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.791357 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.791451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.791537 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.894071 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.894112 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.894122 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.894139 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.894151 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.996780 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.996818 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.996830 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.996848 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:03 crc kubenswrapper[5028]: I1123 06:51:03.996860 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:03Z","lastTransitionTime":"2025-11-23T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.052091 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:04 crc kubenswrapper[5028]: E1123 06:51:04.052224 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.052091 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:04 crc kubenswrapper[5028]: E1123 06:51:04.052696 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.101221 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.101674 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.101892 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.102118 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.102324 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.206405 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.206733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.206886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.207056 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.207196 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.251299 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:04 crc kubenswrapper[5028]: E1123 06:51:04.251673 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:51:04 crc kubenswrapper[5028]: E1123 06:51:04.251818 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:51:12.251778055 +0000 UTC m=+55.949182864 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.311263 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.311324 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.311344 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.311370 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.311387 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.414670 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.414728 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.414746 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.414773 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.414797 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.518401 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.518449 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.518458 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.518475 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.518486 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.621596 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.621653 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.621662 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.621679 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.621692 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.725584 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.725676 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.725700 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.725737 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.725764 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.830258 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.830313 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.830327 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.830352 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.830366 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.933582 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.933635 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.933643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.933659 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:04 crc kubenswrapper[5028]: I1123 06:51:04.933669 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:04Z","lastTransitionTime":"2025-11-23T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.037722 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.037772 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.037790 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.037818 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.037837 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.052389 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:05 crc kubenswrapper[5028]: E1123 06:51:05.052564 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.052646 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:05 crc kubenswrapper[5028]: E1123 06:51:05.052859 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.141039 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.141111 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.141130 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.141161 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.141183 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.244013 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.244083 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.244100 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.244130 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.244148 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.348009 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.348088 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.348110 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.348220 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.348241 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.451819 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.451858 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.451882 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.451900 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.451912 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.554753 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.554821 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.554838 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.554863 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.554883 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.658605 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.658665 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.658676 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.658699 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.658714 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.761488 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.761546 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.761562 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.761586 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.761603 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.864837 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.864891 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.864905 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.864924 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.864938 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.968323 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.968392 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.968408 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.968434 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:05 crc kubenswrapper[5028]: I1123 06:51:05.968452 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:05Z","lastTransitionTime":"2025-11-23T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.052522 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.052601 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:06 crc kubenswrapper[5028]: E1123 06:51:06.052663 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:06 crc kubenswrapper[5028]: E1123 06:51:06.052719 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.070593 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.070646 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.070656 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.070675 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.070685 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.173149 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.173191 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.173201 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.173219 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.173229 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.276432 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.276485 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.276494 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.276510 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.276526 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.379680 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.379726 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.379736 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.379758 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.379768 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.483442 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.483500 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.483509 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.483527 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.483540 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.586276 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.586350 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.586371 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.586406 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.586429 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.690187 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.690291 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.690311 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.690342 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.690371 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.793440 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.793502 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.793520 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.793549 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.793572 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.897144 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.897204 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.897217 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.897236 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:06 crc kubenswrapper[5028]: I1123 06:51:06.897247 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:06Z","lastTransitionTime":"2025-11-23T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.000473 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.000535 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.000556 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.000581 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.000600 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.052921 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.052989 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:07 crc kubenswrapper[5028]: E1123 06:51:07.053212 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:07 crc kubenswrapper[5028]: E1123 06:51:07.053417 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.075219 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.103915 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.104036 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.104057 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.104087 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.104109 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
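
Annotation: from here on, every pod status patch fails for one root cause — the API server must call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, but that webhook's serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2025-11-23T06:51:07Z. Go's TLS stack rejects any peer certificate whose validity window does not contain the current time. A self-contained sketch of that exact check using only the standard library; the certificate file path is a placeholder, not a path from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Reproduces the x509 validity check behind "certificate has expired or
// is not yet valid": parse a PEM certificate and compare the clock
// against its NotBefore/NotAfter bounds, as crypto/x509 verification does.
func main() {
	raw, err := os.ReadFile("/path/to/serving-cert.pem") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

This pattern (a CRC VM resumed months after its certificates were minted) is consistent with every "failed calling webhook" record below: the patches themselves are well-formed, but the admission call can never complete until the certificate is rotated or the clock issue is resolved.
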
Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.109856 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.126409 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.145728 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.163800 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.182716 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.197380 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.207859 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.207943 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.207970 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.207990 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.208001 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.213351 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.226212 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.239568 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.256312 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee
5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.269419 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.282149 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.295806 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.311404 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.311483 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.311510 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.311545 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.311571 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.320748 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb6
9a54f00c525b323e66a4287e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.342257 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.357285 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:07Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.415217 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.415261 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.415273 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.415291 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.415340 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.518698 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.518751 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.518762 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.518783 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.518797 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.621568 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.621609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.621618 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.621637 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.621648 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.725315 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.725380 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.725402 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.725436 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.725463 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.829135 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.829225 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.829245 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.829279 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.829309 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.933030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.933140 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.933160 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.933187 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:07 crc kubenswrapper[5028]: I1123 06:51:07.933205 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:07Z","lastTransitionTime":"2025-11-23T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.036826 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.036880 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.036893 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.036922 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.036934 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.052473 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.052486 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.052659 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.052902 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.139747 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.139823 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.139843 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.139875 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.139896 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.242547 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.242596 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.242611 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.242635 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.242650 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.345693 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.345737 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.345776 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.345793 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.345804 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.448410 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.448451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.448462 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.448479 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.448490 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.551412 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.551468 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.551477 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.551493 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.551505 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.655218 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.655265 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.655275 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.655293 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.655303 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.757619 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.757658 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.757670 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.757685 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.757694 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.807996 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.808181 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:51:40.808155904 +0000 UTC m=+84.505560683 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.861063 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.861117 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.861126 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.861143 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.861156 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.909871 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.909972 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.910015 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.910037 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910037 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910136 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:40.91011397 +0000 UTC m=+84.607518749 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910201 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910223 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910237 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910252 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910289 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:40.910272294 +0000 UTC m=+84.607677153 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910201 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910330 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910346 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910349 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:40.910323745 +0000 UTC m=+84.607728534 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 23 06:51:08 crc kubenswrapper[5028]: E1123 06:51:08.910386 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:51:40.910373896 +0000 UTC m=+84.607778785 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.963819 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.963870 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.963881 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.963900 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:08 crc kubenswrapper[5028]: I1123 06:51:08.963910 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:08Z","lastTransitionTime":"2025-11-23T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.053064 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:09 crc kubenswrapper[5028]: E1123 06:51:09.053231 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.053300 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:09 crc kubenswrapper[5028]: E1123 06:51:09.053506 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.066282 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.066321 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.066334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.066352 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.066363 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.168147 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.168181 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.168192 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.168210 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.168223 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.270067 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.270106 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.270118 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.270160 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.270171 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.372359 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.372448 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.372466 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.372493 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.372510 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.476443 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.476511 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.476527 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.476548 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.476569 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.578580 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.578630 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.578643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.578660 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.578671 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.680522 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.680555 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.680565 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.680579 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.680598 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.782982 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.783016 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.783026 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.783041 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.783053 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.890078 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.890137 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.890162 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.890192 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.890215 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.992475 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.992535 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.992554 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.992576 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:09 crc kubenswrapper[5028]: I1123 06:51:09.992591 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:09Z","lastTransitionTime":"2025-11-23T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.052689 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.052791 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:10 crc kubenswrapper[5028]: E1123 06:51:10.052864 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:10 crc kubenswrapper[5028]: E1123 06:51:10.053083 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.054842 5028 scope.go:117] "RemoveContainer" containerID="3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.095764 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.095826 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.095842 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.095862 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.096294 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.198560 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.198646 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.198665 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.198733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.198746 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.302395 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.302442 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.302455 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.302473 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.302484 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.404770 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.404808 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.404819 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.404867 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.404879 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.507867 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.507908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.507920 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.507938 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.507967 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.610121 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.610155 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.610163 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.610180 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.610191 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.713204 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.713241 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.713252 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.713269 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.713280 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.814926 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.814985 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.814995 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.815008 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.815018 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.917623 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.917676 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.917690 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.917707 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:10 crc kubenswrapper[5028]: I1123 06:51:10.917721 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:10Z","lastTransitionTime":"2025-11-23T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.020544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.020636 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.020645 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.020662 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.020673 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.052379 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.052397 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:11 crc kubenswrapper[5028]: E1123 06:51:11.052511 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:11 crc kubenswrapper[5028]: E1123 06:51:11.052665 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.122733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.122770 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.122779 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.122797 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.122806 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.224611 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.224643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.224653 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.224666 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.224675 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.327028 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.327072 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.327081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.327101 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.327111 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.380734 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/1.log" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.383674 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.384301 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.417827 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23f
f559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 
06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.429906 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.429976 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.429985 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.430002 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.430011 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.439714 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.454784 5028 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.472883 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.490518 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.503087 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.515497 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.525734 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.532830 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.532877 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.532886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.532900 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.532909 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.536234 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.558518 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.568971 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.580521 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.591409 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.603151 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.613548 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.624752 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.635022 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.635061 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.635073 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.635091 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.635102 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.639673 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:11Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.737536 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.737574 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.737586 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.737603 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.737613 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.840308 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.840349 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.840359 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.840373 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.840383 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.943269 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.943655 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.943798 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.943928 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:11 crc kubenswrapper[5028]: I1123 06:51:11.944085 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:11Z","lastTransitionTime":"2025-11-23T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.048752 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.048857 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.048878 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.048908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.048928 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.053105 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:12 crc kubenswrapper[5028]: E1123 06:51:12.053315 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.053126 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:12 crc kubenswrapper[5028]: E1123 06:51:12.053612 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.151794 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.151840 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.151850 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.151863 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.151872 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.255195 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.255274 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.255290 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.255313 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.255329 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.343458 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:12 crc kubenswrapper[5028]: E1123 06:51:12.343649 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:51:12 crc kubenswrapper[5028]: E1123 06:51:12.343716 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:51:28.343698168 +0000 UTC m=+72.041102957 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.358035 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.358076 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.358087 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.358103 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.358114 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.389223 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/2.log" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.389839 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/1.log" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.393052 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c" exitCode=1 Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.393100 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.393148 5028 scope.go:117] "RemoveContainer" containerID="3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.394657 5028 scope.go:117] "RemoveContainer" containerID="3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c" Nov 23 06:51:12 crc kubenswrapper[5028]: E1123 06:51:12.395108 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.411588 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.428003 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.445696 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.460842 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.461628 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.461658 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.461666 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.461680 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.461690 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.477742 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.504754 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a9b095195942662d103b5a3a458a86714f50eb69a54f00c525b323e66a4287e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"message\\\":\\\"Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:50:54Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:50:54.451327 6473 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1123 06:50:54.4513\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:53Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.524973 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.535553 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.552314 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40ed
a2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.564758 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.564930 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc 
kubenswrapper[5028]: I1123 06:51:12.565019 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.565093 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.565151 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.566786 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.579704 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.590537 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.603344 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.615565 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.628930 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.641420 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.654771 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:12Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.667649 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.667694 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.667703 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.667720 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.667733 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.769803 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.769840 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.769852 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.769875 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.769887 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.872455 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.872488 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.872496 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.872509 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.872519 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.974725 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.974776 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.974792 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.974813 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:12 crc kubenswrapper[5028]: I1123 06:51:12.974831 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:12Z","lastTransitionTime":"2025-11-23T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.016061 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.016104 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.016114 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.016129 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.016139 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.037135 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.042156 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.042185 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.042196 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.042209 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.042218 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.052941 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.053013 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.053073 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.053146 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.055480 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.059371 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.059485 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.059562 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.059643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.059733 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.071404 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.074916 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.075044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.075128 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.075220 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.075299 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.086236 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.089895 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.089920 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.090145 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.090163 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.090175 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.106661 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.106780 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.108794 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.108813 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.108821 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.108833 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.108841 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.210607 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.210638 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.210649 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.210668 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.210679 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.313340 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.313374 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.313382 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.313397 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.313405 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.400774 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/2.log" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.404640 5028 scope.go:117] "RemoveContainer" containerID="3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c" Nov 23 06:51:13 crc kubenswrapper[5028]: E1123 06:51:13.404781 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.415227 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.415254 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.415264 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.415275 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.415285 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.416854 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.429685 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.443143 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.460118 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.471686 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.479809 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.493761 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
,{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.510520 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23f
f559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.517688 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.517755 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.517773 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.517799 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.517817 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.524034 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.536011 5028 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.548339 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.562365 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.576378 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.591222 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.601018 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.621521 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.621570 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.621585 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.621605 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.621622 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.632335 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.648859 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.724560 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.724615 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.724632 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.724654 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.724672 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.802376 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.812159 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.819035 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.827504 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.827533 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.827545 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.827562 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.827572 5028 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.829338 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.853187 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.868098 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.879937 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.890595 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.901385 5028 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.912517 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.922465 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.930272 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.930310 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.930321 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.930338 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.930353 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:13Z","lastTransitionTime":"2025-11-23T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.934306 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.950469 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.963833 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.974204 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:13 crc kubenswrapper[5028]: I1123 06:51:13.994696 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:13Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.007894 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.019411 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:14Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.031039 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:14Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.033567 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.033604 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.033616 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.033635 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.033646 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.052829 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:14 crc kubenswrapper[5028]: E1123 06:51:14.052922 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.052830 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:14 crc kubenswrapper[5028]: E1123 06:51:14.053188 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.135764 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.135795 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.135802 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.135814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.135823 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.238036 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.238070 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.238100 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.238114 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.238123 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.339872 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.339909 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.339965 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.339983 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.339994 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.442634 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.442692 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.442705 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.442721 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.442731 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.545288 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.545334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.545348 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.545367 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.545380 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.647450 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.647480 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.647488 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.647505 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.647516 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.750104 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.750158 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.750166 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.750180 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.750190 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.852394 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.852427 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.852438 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.852453 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.852464 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.955538 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.955600 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.955621 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.955645 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:14 crc kubenswrapper[5028]: I1123 06:51:14.955661 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:14Z","lastTransitionTime":"2025-11-23T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.052416 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.052475 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:15 crc kubenswrapper[5028]: E1123 06:51:15.052645 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:15 crc kubenswrapper[5028]: E1123 06:51:15.052796 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.057451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.057489 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.057525 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.057544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.057556 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.160448 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.160531 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.160547 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.160563 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.160576 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.263135 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.263210 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.263234 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.263265 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.263287 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.365554 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.365599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.365616 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.365637 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.365654 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.471262 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.471306 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.471323 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.471519 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.471537 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.574337 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.574403 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.574421 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.574445 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.574462 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.676578 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.676626 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.676643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.676663 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.676678 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.779129 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.779171 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.779182 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.779196 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.779205 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.881357 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.881397 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.881407 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.881421 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.881430 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.984398 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.984446 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.984457 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.984475 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:15 crc kubenswrapper[5028]: I1123 06:51:15.984489 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.052506 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.052530 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
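Each setters.go:603 entry embeds the full Ready condition as one JSON object, which is hard to scan inline. The sketch below unmarshals a payload copied verbatim from the lines above; the struct mirrors only the keys present in these log lines, not the full Kubernetes NodeCondition type.

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal mirror of the condition object in the setters.go lines above;
// the field set matches the log payload, not the full Kubernetes type.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from one of the "Node became not ready" entries.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:15Z","lastTransitionTime":"2025-11-23T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}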
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:16 crc kubenswrapper[5028]: E1123 06:51:16.052773 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.087386 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.087424 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.087435 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.087450 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.087461 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.189708 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.189769 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.189779 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.189793 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.189802 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.293412 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.293469 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.293485 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.293506 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.293521 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.396430 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.396484 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.396495 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.396511 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.396522 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.499096 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.499147 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.499159 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.499178 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.499192 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
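The status_manager.go:875 failures further down share a second, independent cause: every status patch is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z. A probe like the following sketch, run on the node, reads the validity window directly; verification is skipped deliberately so the handshake completes even with the expired certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

// Sketch: inspect the certificate served on the webhook port named in
// the log entries below (127.0.0.1:9743). InsecureSkipVerify is for
// inspection only, never for real traffic.
func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := certs[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// Matches the x509 "certificate has expired" error below.
		fmt.Println("certificate has expired")
	}
}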
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.603727 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.603793 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.603818 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.603848 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.603870 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.705867 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.705920 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.705933 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.705973 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.706044 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.808834 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.808866 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.808876 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.808890 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.808901 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
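The ovnkube-node-xbtxp status below shows ovnkube-controller in CrashLoopBackOff with "back-off 20s" at restartCount 2. That is consistent with the kubelet's usual restart back-off, sketched here under the assumption of the commonly cited defaults (10s initial delay, doubling per failed restart, capped at 5m); these defaults are an assumption, not taken from this log.

package main

import (
	"fmt"
	"time"
)

// Sketch of a doubling restart back-off. The 10s/5m values are assumed
// defaults; they match "back-off 20s" at the second restart seen below.
func main() {
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for restart := 1; restart <= 6; restart++ {
		fmt.Printf("restart %d -> back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}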
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.911107 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.911144 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.911155 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.911172 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:16 crc kubenswrapper[5028]: I1123 06:51:16.911185 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:16Z","lastTransitionTime":"2025-11-23T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.013680 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.013718 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.013729 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.013746 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.013758 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:17Z","lastTransitionTime":"2025-11-23T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.052166 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.052218 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:51:17 crc kubenswrapper[5028]: E1123 06:51:17.052284 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:17 crc kubenswrapper[5028]: E1123 06:51:17.052427 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.067230 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.084811 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.099188 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.114475 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.123760 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.123830 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.123852 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.123878 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.123901 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:17Z","lastTransitionTime":"2025-11-23T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.133554 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.143219 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.157410 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.175991 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a
127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.189626 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.201268 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.221874 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.227682 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.227725 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.227738 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.227775 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.227790 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:17Z","lastTransitionTime":"2025-11-23T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.236554 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.254169 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.274750 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.297420 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.310869 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.330765 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.330839 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.330857 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:17 crc 
kubenswrapper[5028]: I1123 06:51:17.330878 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.330913 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:17Z","lastTransitionTime":"2025-11-23T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.345647 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88e
dce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.365424 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:17Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.434044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.434095 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.434111 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.434131 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.434146 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:17Z","lastTransitionTime":"2025-11-23T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.537526 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.537573 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.537589 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.537611 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:17 crc kubenswrapper[5028]: I1123 06:51:17.537625 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:17Z","lastTransitionTime":"2025-11-23T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry sequence (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeats, timestamps aside, at 06:51:17.640, 06:51:17.743, 06:51:17.845, 06:51:17.949 and 06:51:18.051 ...]
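Nearly everything from here to the end of the excerpt is the kubelet's node-status loop re-emitting those same five messages while it waits for the network plugin, so further repeats are collapsed to their timestamps. A throwaway sketch of that collapsing, assuming the journal has been split into one-entry-per-line strings; the RFC 3339 mask is needed because the condition JSON embeds its own heartbeat timestamps:

```python
import re
from collections import Counter

# Syslog prefix ("Nov 23 06:51:17 crc kubenswrapper[5028]: ") plus the
# klog header ("I1123 06:51:17.537526 5028 ") that vary per entry.
PREFIX = re.compile(r"^\w{3} \d+ \S+ \S+ \S+\[\d+\]: [IWEF]\d{4} \S+\s+\d+ ")
# Timestamps embedded in the condition JSON (lastHeartbeatTime, ...).
RFC3339 = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z")

def summarize(lines):
    counts = Counter(RFC3339.sub("<ts>", PREFIX.sub("", ln)) for ln in lines)
    for message, n in counts.most_common():
        print(f"{n:4}x  {message[:120]}")
```

Run over this excerpt it reduces a few hundred entries to roughly half a dozen distinct messages.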
Nov 23 06:51:18 crc kubenswrapper[5028]: I1123 06:51:18.052260 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:51:18 crc kubenswrapper[5028]: I1123 06:51:18.052301 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:18 crc kubenswrapper[5028]: E1123 06:51:18.052398 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be"
Nov 23 06:51:18 crc kubenswrapper[5028]: E1123 06:51:18.052589 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the five-entry status sequence repeats at 06:51:18.154 ...]
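The two pod_workers.go errors above are the first pods visibly blocked by the missing CNI config; the same handful of pods cycle through the rest of the excerpt. A quick way to see which pods are stuck and how often, keyed to the pod= and podUID= fields of those entries (the kubelet.log filename is a stand-in for wherever this journal was dumped):

```python
import re
from collections import Counter

# Keyed to the pod="ns/name" and podUID="..." fields in the
# pod_workers.go entries above.
POD_ERR = re.compile(
    r'"Error syncing pod, skipping".*?pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

def failing_pods(journal_text: str) -> Counter:
    return Counter((m["pod"], m["uid"]) for m in POD_ERR.finditer(journal_text))

text = open("kubelet.log").read()  # stand-in path for a dump of this journal
for (pod, uid), n in failing_pods(text).most_common():
    print(f"{n:3}x {pod} ({uid})")
```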
[... the five-entry status sequence repeats at 06:51:18.256, 06:51:18.358, 06:51:18.460, 06:51:18.563, 06:51:18.667, 06:51:18.770, 06:51:18.873 and 06:51:18.977 ...]
Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.052589 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.052750 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:51:19 crc kubenswrapper[5028]: E1123 06:51:19.052897 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
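Every NotReady heartbeat and every sync failure in this excerpt bottoms out in the same check: the kubelet finds nothing in /etc/kubernetes/cni/net.d/ because the network operator has not yet written a config there. A sketch of the equivalent check, runnable on the node itself; the directory comes from the log, while the .conf/.conflist/.json suffix set is the usual CNI naming convention and an assumption, not something the log states:

```python
from pathlib import Path

# Directory named in every NotReady message above.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs():
    # *.conf / *.conflist (and legacy *.json) per CNI convention.
    if not CNI_DIR.is_dir():
        return []
    return sorted(p for p in CNI_DIR.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

found = cni_configs()
if found:
    for path in found:
        print("CNI config:", path)
else:
    print(f"no CNI configuration file in {CNI_DIR}/, network plugin not ready")
```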
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:19 crc kubenswrapper[5028]: E1123 06:51:19.053204 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.079884 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.079924 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.079990 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.080017 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.080031 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:19Z","lastTransitionTime":"2025-11-23T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.183064 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.183129 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.183153 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.183183 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.183205 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:19Z","lastTransitionTime":"2025-11-23T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.285804 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.285861 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.285871 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.285886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.285896 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:19Z","lastTransitionTime":"2025-11-23T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.388651 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.388686 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.388696 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.388711 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.388721 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:19Z","lastTransitionTime":"2025-11-23T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.491334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.491369 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.491378 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.491391 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.491401 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:19Z","lastTransitionTime":"2025-11-23T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
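The condition={...} payload on the setters.go:603 entries is plain JSON, so the Ready history can be recovered mechanically rather than by eyeballing the repeats. A small extractor keyed to that field, shown against a shortened sample entry:

```python
import json
import re

# The setters.go:603 entries embed the node condition as JSON after
# 'condition='; it sits at the end of the entry, so a greedy match works.
COND = re.compile(r"condition=(\{.*\})")

def ready_history(lines):
    for ln in lines:
        m = COND.search(ln)
        if m:
            c = json.loads(m.group(1))
            yield c["lastHeartbeatTime"], c["type"], c["status"], c["reason"]

sample = ('Nov 23 06:51:19 crc kubenswrapper[5028]: I1123 06:51:19.285896 5028 '
          'setters.go:603] "Node became not ready" node="crc" '
          'condition={"type":"Ready","status":"False",'
          '"lastHeartbeatTime":"2025-11-23T06:51:19Z",'
          '"lastTransitionTime":"2025-11-23T06:51:19Z",'
          '"reason":"KubeletNotReady","message":"container runtime network not ready"}')
for ts, typ, status, reason in ready_history([sample]):
    print(ts, f"{typ}={status}", reason)  # 2025-11-23T06:51:19Z Ready=False KubeletNotReady
```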
[... the five-entry status sequence repeats at 06:51:19.594, 06:51:19.697, 06:51:19.800, 06:51:19.904 and 06:51:20.006 ...]
Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.052882 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.052934 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:51:20 crc kubenswrapper[5028]: E1123 06:51:20.053153 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
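The elided status blocks land at a noticeably regular spacing of roughly 100 ms, which is worth confirming when deciding whether this is a tight retry loop or just the normal status-update cadence. A sketch that measures the spacing from the microsecond klog timestamps; the year is an assumption, since klog headers omit it:

```python
import re
from datetime import datetime

# klog stamps ("I1123 06:51:20.007090") carry microseconds but no year.
STAMP = re.compile(r"[IWEF](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})")

def intervals(lines, year=2025):  # year assumed; klog headers omit it
    times = []
    for ln in lines:
        m = STAMP.search(ln)
        if m and "Node became not ready" in ln:
            times.append(datetime.strptime(
                f"{year}{m.group(1)} {m.group(2)}", "%Y%m%d %H:%M:%S.%f"))
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

sample = [
    'I1123 06:51:19.904398 5028 setters.go:603] "Node became not ready" node="crc"',
    'I1123 06:51:20.007090 5028 setters.go:603] "Node became not ready" node="crc"',
]
print(intervals(sample))  # [0.102692]
```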
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:20 crc kubenswrapper[5028]: E1123 06:51:20.053385 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.109727 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.109795 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.109814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.109842 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.109862 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.212580 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.212627 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.212636 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.212677 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.212690 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.316173 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.316251 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.316272 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.316298 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.316315 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.420297 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.420370 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.420390 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.420417 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.420438 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.524325 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.524400 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.524421 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.524448 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.524470 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.628744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.628819 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.628841 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.628870 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.628887 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.731856 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.731938 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.731997 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.732034 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.732065 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.835350 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.835412 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.835430 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.835457 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.835476 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.938477 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.938510 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.938544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.938558 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:20 crc kubenswrapper[5028]: I1123 06:51:20.938567 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:20Z","lastTransitionTime":"2025-11-23T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:21 crc kubenswrapper[5028]: I1123 06:51:21.040860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:21 crc kubenswrapper[5028]: I1123 06:51:21.040882 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:21 crc kubenswrapper[5028]: I1123 06:51:21.040890 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:21 crc kubenswrapper[5028]: I1123 06:51:21.040903 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:21 crc kubenswrapper[5028]: I1123 06:51:21.040912 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:21Z","lastTransitionTime":"2025-11-23T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:21 crc kubenswrapper[5028]: I1123 06:51:21.054482 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:21 crc kubenswrapper[5028]: E1123 06:51:21.054578 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:21 crc kubenswrapper[5028]: I1123 06:51:21.054742 5028 util.go:30] "No sandbox for pod can be found. 
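For anything heavier than one-off greps it helps to split the klog header these kubenswrapper entries share (severity letter, MMDD, wall time, PID, file:line) before filtering. A stdlib-only parser for that header:

```python
import re
from typing import NamedTuple, Optional

# Header shared by every kubenswrapper entry above, e.g.
#   I1123 06:51:21.054482 5028 util.go:30] "No sandbox for pod ..."
KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6})"
    r"\s+(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)"
)

class Entry(NamedTuple):
    severity: str  # I=info, W=warning, E=error, F=fatal
    mmdd: str
    time: str
    pid: int
    source: str    # file:line inside the kubelet
    message: str

def parse_klog(line: str) -> Optional[Entry]:
    m = KLOG.search(line)
    if m is None:
        return None
    return Entry(m["sev"], m["mmdd"], m["time"], int(m["pid"]),
                 m["src"], m["msg"])

e = parse_klog('E1123 06:51:21.054578 5028 pod_workers.go:1301] "Error syncing pod, skipping"')
print(e.severity, e.source, e.message)
```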
Nov 23 06:51:21 crc kubenswrapper[5028]: E1123 06:51:21.054798 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the five-entry status sequence repeats at 06:51:21.143, 06:51:21.246, 06:51:21.349, 06:51:21.451, 06:51:21.555, 06:51:21.658, 06:51:21.761 and 06:51:21.864 ...]
[... the five-entry status sequence repeats at 06:51:21.966 ...]
Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.052762 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.052823 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:22 crc kubenswrapper[5028]: E1123 06:51:22.052884 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be"
Nov 23 06:51:22 crc kubenswrapper[5028]: E1123 06:51:22.052989 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the five-entry status sequence repeats at 06:51:22.069 and 06:51:22.171 ...]
Has your network provider started?"} Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.273844 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.273878 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.273889 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.273906 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.273915 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:22Z","lastTransitionTime":"2025-11-23T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.376634 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.376672 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.376685 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.376699 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.376709 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:22Z","lastTransitionTime":"2025-11-23T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.479453 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.479537 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.479556 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.479582 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.479600 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:22Z","lastTransitionTime":"2025-11-23T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.582288 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.582335 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.582349 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.582367 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.582380 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:22Z","lastTransitionTime":"2025-11-23T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.684434 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.684472 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.684482 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.684501 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.684513 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:22Z","lastTransitionTime":"2025-11-23T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.833215 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.833258 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.833267 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.833284 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.833296 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:22Z","lastTransitionTime":"2025-11-23T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.935754 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.935793 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.935803 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.935817 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:22 crc kubenswrapper[5028]: I1123 06:51:22.935827 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:22Z","lastTransitionTime":"2025-11-23T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.038806 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.038866 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.038883 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.038906 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.038930 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.052168 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.052286 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.052181 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.052529 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.141015 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.141069 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.141082 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.141098 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.141108 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.243884 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.243924 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.243962 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.243979 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.243990 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.251704 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.251729 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.251737 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.251747 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.251756 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.263028 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:23Z is after 2025-08-24T17:21:41Z"
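
This first "Error updating node status" entry pins down the root cause more precisely than the surrounding CNI noise: the status patch is rejected because the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-23. A probe independent of the kubelet can confirm which side is wrong; the sketch below is a hypothetical standalone check (not part of OpenShift), and InsecureSkipVerify is deliberate because ordinary verification already rejects the expired chain:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Endpoint taken verbatim from the webhook error above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        // Leaf certificate the webhook served during the handshake.
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
    }

Printing both notBefore and notAfter distinguishes a genuinely expired certificate from clock skew (a notBefore in the future); both cases produce the same "certificate has expired or is not yet valid" x509 text seen in the retries that follow.
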
event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.266232 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.266247 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.266256 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.278584 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.282189 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.282227 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.282238 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.282254 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.282266 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.293811 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.296609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.296636 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.296646 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.296660 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.296671 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.308324 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.311752 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.311782 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.311791 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.311805 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.311816 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.331137 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:23Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:23 crc kubenswrapper[5028]: E1123 06:51:23.331285 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.345825 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.345857 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.345868 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.345885 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.345896 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.448210 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.448244 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.448253 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.448265 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.448276 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.550596 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.550636 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.550647 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.550685 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.550699 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.652841 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.652876 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.652887 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.652900 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.652909 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.755462 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.755502 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.755536 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.755551 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.755562 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.858271 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.858319 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.858334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.858351 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.858365 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.960919 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.961050 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.961070 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.961095 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:23 crc kubenswrapper[5028]: I1123 06:51:23.961114 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:23Z","lastTransitionTime":"2025-11-23T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.052706 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.052737 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:24 crc kubenswrapper[5028]: E1123 06:51:24.052846 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:24 crc kubenswrapper[5028]: E1123 06:51:24.053012 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.085638 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.085713 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.085734 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.085759 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.085777 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.188434 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.188482 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.188494 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.188512 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.188525 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.291231 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.291265 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.291276 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.291295 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.291313 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.393128 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.393173 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.393184 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.393200 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.393209 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.495633 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.495675 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.495687 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.495703 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.495716 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.598014 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.598056 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.598067 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.598084 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.598097 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.700280 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.700326 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.700335 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.700350 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.700359 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.802693 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.803044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.803180 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.803272 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.803353 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.906000 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.906299 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.906369 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.906474 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:24 crc kubenswrapper[5028]: I1123 06:51:24.906546 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:24Z","lastTransitionTime":"2025-11-23T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.008748 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.009076 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.009145 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.009211 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.009274 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.052492 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:25 crc kubenswrapper[5028]: E1123 06:51:25.052644 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.052509 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:25 crc kubenswrapper[5028]: E1123 06:51:25.052877 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.065480 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.111745 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.111781 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.111792 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.111806 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.111816 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.214385 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.214428 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.214439 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.214453 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.214463 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.316721 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.316801 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.316823 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.316860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.316883 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.419053 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.419154 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.419173 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.419198 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.419215 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.521563 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.521609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.521618 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.521636 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.521647 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.625134 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.625203 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.625225 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.625257 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.625278 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.728162 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.728208 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.728221 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.728237 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.728249 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.831542 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.831617 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.831643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.831675 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.831699 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.934403 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.934460 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.934471 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.934487 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:25 crc kubenswrapper[5028]: I1123 06:51:25.934496 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:25Z","lastTransitionTime":"2025-11-23T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.037677 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.037985 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.038076 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.038162 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.038257 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.052151 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:26 crc kubenswrapper[5028]: E1123 06:51:26.052360 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.052376 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:26 crc kubenswrapper[5028]: E1123 06:51:26.052492 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.141355 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.141388 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.141397 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.141410 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.141419 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.243706 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.243744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.243754 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.243770 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.243780 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.345895 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.346001 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.346024 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.346056 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.346080 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.448283 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.448334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.448344 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.448361 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.448376 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.550486 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.550540 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.550557 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.550580 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.550597 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.652877 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.653137 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.653226 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.653333 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.653425 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.755930 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.755983 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.755996 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.756025 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.756035 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.858404 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.858703 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.858777 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.858876 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.859000 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.961299 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.961331 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.961339 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.961352 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:26 crc kubenswrapper[5028]: I1123 06:51:26.961362 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:26Z","lastTransitionTime":"2025-11-23T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.052301 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.052466 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:27 crc kubenswrapper[5028]: E1123 06:51:27.052759 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:27 crc kubenswrapper[5028]: E1123 06:51:27.053088 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.063025 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.063058 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.063068 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.063091 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.063101 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.067089 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.080934 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.091809 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.111808 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d
5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.125215 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.140925 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.155832 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.165030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.165291 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.165359 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.165593 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.166070 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.170965 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.184022 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.196831 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.210217 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.222630 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.242796 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23f
f559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.255683 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.265294 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.269378 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.269590 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.269665 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.269728 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.269811 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.277963 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.290174 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.299090 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.311154 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:27Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.372186 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.372222 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.372233 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.372248 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.372260 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.474124 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.474168 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.474185 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.474202 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.474214 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.576849 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.576902 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.576913 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.576933 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.576961 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.679820 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.680244 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.680329 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.680409 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.680485 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.783518 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.783571 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.783584 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.783606 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.783620 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.887473 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.888197 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.888514 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.888704 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.888896 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.992674 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.992739 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.992758 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.992785 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:27 crc kubenswrapper[5028]: I1123 06:51:27.992804 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:27Z","lastTransitionTime":"2025-11-23T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.052674 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:28 crc kubenswrapper[5028]: E1123 06:51:28.053939 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.053838 5028 scope.go:117] "RemoveContainer" containerID="3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c" Nov 23 06:51:28 crc kubenswrapper[5028]: E1123 06:51:28.054505 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.053102 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:28 crc kubenswrapper[5028]: E1123 06:51:28.058573 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.095241 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.095284 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.095320 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.095342 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.095354 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.198853 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.198907 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.198921 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.198939 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.198976 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.301030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.301078 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.301090 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.301105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.301116 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.403768 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.403817 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.403828 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.403847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.403861 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.418515 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:28 crc kubenswrapper[5028]: E1123 06:51:28.418694 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:51:28 crc kubenswrapper[5028]: E1123 06:51:28.418767 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:52:00.418746054 +0000 UTC m=+104.116150833 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.506208 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.506251 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.506264 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.506279 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.506291 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.609666 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.609722 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.609735 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.609757 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.609772 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.711835 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.711875 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.711888 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.711905 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.711918 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.814126 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.814171 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.814183 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.814204 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.814218 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.917264 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.917324 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.917338 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.917357 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:28 crc kubenswrapper[5028]: I1123 06:51:28.917370 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:28Z","lastTransitionTime":"2025-11-23T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.020509 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.020543 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.020552 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.020565 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.020575 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.053051 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.053161 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:29 crc kubenswrapper[5028]: E1123 06:51:29.053181 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:29 crc kubenswrapper[5028]: E1123 06:51:29.053356 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.123835 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.124204 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.124287 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.124367 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.124460 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.227664 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.227705 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.227719 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.227738 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.227752 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.331361 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.331414 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.331426 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.331445 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.331462 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.433944 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.434023 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.434034 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.434054 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.434067 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.537029 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.537086 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.537098 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.537120 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.537131 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.639659 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.639716 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.639727 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.639743 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.639754 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.742751 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.742811 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.742825 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.742849 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.742865 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.846759 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.846824 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.846843 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.846870 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.846892 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.950563 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.950647 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.950701 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.950734 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:29 crc kubenswrapper[5028]: I1123 06:51:29.950811 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:29Z","lastTransitionTime":"2025-11-23T06:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.052125 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.052197 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:30 crc kubenswrapper[5028]: E1123 06:51:30.052285 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:30 crc kubenswrapper[5028]: E1123 06:51:30.052463 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.055609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.055646 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.055655 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.055669 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.055679 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.157759 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.157814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.157826 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.157854 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.157869 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.260705 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.260763 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.260779 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.260803 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.260819 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.363364 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.363427 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.363450 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.363483 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.363504 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.465703 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.465738 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.465746 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.465761 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.465775 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.477645 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/0.log" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.477708 5028 generic.go:334] "Generic (PLEG): container finished" podID="e634c65f-8585-4d5d-b929-b9e1255f8921" containerID="34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321" exitCode=1 Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.477747 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerDied","Data":"34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.478386 5028 scope.go:117] "RemoveContainer" containerID="34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.494357 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.505304 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7934
26f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.521616 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.532321 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.544899 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 
2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.556580 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.568445 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.568475 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.568483 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.568495 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.568504 5028 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.574213 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.589755 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.601332 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.613441 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.625109 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.639773 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.652392 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.663858 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.671047 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.671089 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.671101 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.671118 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.671132 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.677144 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.697574 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.712206 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.725667 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.767362 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:30Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.774183 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.774228 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.774240 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.774257 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.774268 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.876697 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.876733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.876744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.876758 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.876770 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.978878 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.978914 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.978925 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.978941 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:30 crc kubenswrapper[5028]: I1123 06:51:30.978972 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:30Z","lastTransitionTime":"2025-11-23T06:51:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.052007 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.052066 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:31 crc kubenswrapper[5028]: E1123 06:51:31.052133 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:31 crc kubenswrapper[5028]: E1123 06:51:31.052279 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.081367 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.081395 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.081404 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.081416 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.081424 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.183602 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.183634 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.183642 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.183657 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.183669 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.286040 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.286079 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.286090 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.286105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.286115 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.388744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.388774 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.388782 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.388794 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.388802 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.481747 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/0.log" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.481802 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerStarted","Data":"f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.490582 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.490610 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.490618 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.490631 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.490641 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.495187 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.505646 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.525896 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.542095 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.553836 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.564601 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.584075 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 
2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.593233 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.593274 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.593286 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.593304 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.593316 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.595505 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.607138 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.619900 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.631012 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.640894 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.657168 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23f
f559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.669015 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.677635 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.687798 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.695645 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.695672 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.695681 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.695696 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.695705 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.699170 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.709413 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.722625 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:31Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.798655 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.798713 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.798732 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.798756 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.798772 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.901693 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.901755 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.901776 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.901805 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:31 crc kubenswrapper[5028]: I1123 06:51:31.901823 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:31Z","lastTransitionTime":"2025-11-23T06:51:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.004638 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.004677 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.004686 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.004700 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.004709 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.052290 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:32 crc kubenswrapper[5028]: E1123 06:51:32.052462 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.052726 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:32 crc kubenswrapper[5028]: E1123 06:51:32.052831 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.107542 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.107582 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.107592 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.107610 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.107621 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.209810 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.209842 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.209853 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.209867 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.209875 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.313616 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.313691 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.313714 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.313744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.313766 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.417244 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.417324 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.417343 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.417376 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.417397 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.520358 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.520399 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.520408 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.520423 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.520467 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.623858 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.623927 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.623977 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.624005 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.624025 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.726696 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.726772 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.726783 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.726808 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.726821 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.829881 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.829941 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.829969 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.829992 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.830006 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.933124 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.933184 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.933204 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.933229 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:32 crc kubenswrapper[5028]: I1123 06:51:32.933245 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.036744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.036817 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.036845 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.036874 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.036895 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.052822 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.052888 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.053096 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.053217 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.139848 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.139882 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.139892 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.139908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.139918 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.242413 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.242443 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.242453 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.242468 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.242477 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.345772 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.345819 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.345835 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.345858 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.345875 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.448796 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.448847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.448857 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.448873 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.448884 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.551351 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.551388 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.551447 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.551462 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.551472 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.653515 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.653553 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.653567 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.653585 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.653596 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.712622 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.712674 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.712687 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.712703 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.712714 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
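
Each of the repeated setters.go entries above embeds the full Ready condition as JSON. The short Go sketch below unmarshals one sample into a plain struct to make the condition's shape explicit; the struct mirrors the field names as they appear in the log rather than the upstream k8s.io/api types.

    // condition.go: decode one "Node became not ready" condition from the
    // log into a plain struct and print its summary fields.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        sample := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:32Z","lastTransitionTime":"2025-11-23T06:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(sample), &c); err != nil {
            panic(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }

Note that lastTransitionTime advances with every heartbeat (06:51:31, 06:51:32, 06:51:33), consistent with the status patches never landing on the API server, so the kubelet keeps recomputing the transition time on each sync.
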
Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.729425 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.733303 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.733347 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.733363 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.733384 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.733399 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.747641 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.751907 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.752029 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.752057 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.752092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.752113 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.770742 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.775682 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.775744 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.775767 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.775796 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.775818 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.790313 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:33Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.794832 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.794898 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
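Every failed patch above shares one root cause: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, roughly three months before the node's clock time of 2025-11-23. A minimal standalone probe of that endpoint's certificate window, a sketch assuming Go is available on the CRC VM (the address and expiry date come from the log itself, nothing else does):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the kubelet error above.
	// InsecureSkipVerify lets the handshake complete so the certificate
	// can be read even though it is expired.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		fmt.Println("expired: matches the x509 error in the kubelet log")
	}
}

A notAfter of 2025-08-24T17:21:41Z here would confirm the webhook is serving the stale certificate itself, rather than, say, a clock problem on the node.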
event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.794934 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.794989 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.795010 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.813936 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Nov 23 06:51:33 crc kubenswrapper[5028]: E1123 06:51:33.814195 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.816204 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
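Alongside the webhook failure, the node reports NotReady for a second, independent reason that repeats through the rest of this log: /etc/kubernetes/cni/net.d/ holds no CNI configuration, so the container runtime network never comes up. A quick on-node check of exactly that condition, again a sketch assuming Go on the CRC VM:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The directory the kubelet names in every NetworkPluginNotReady message.
	const dir = "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "-", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println(dir, "is empty: no CNI config, the network provider has not started")
		return
	}
	for _, e := range entries {
		fmt.Println("found CNI config:", dir+e.Name())
	}
}

An empty listing here matches the kubelet message; the configuration normally appears once the cluster network provider's pods, themselves blocked above, manage to start.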
event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.816285 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.816305 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.816330 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.816394 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.919846 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.919907 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.919930 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.920003 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:33 crc kubenswrapper[5028]: I1123 06:51:33.920030 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:33Z","lastTransitionTime":"2025-11-23T06:51:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.022849 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.022899 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.022922 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.022966 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.022982 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.052697 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.052735 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:34 crc kubenswrapper[5028]: E1123 06:51:34.052851 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:34 crc kubenswrapper[5028]: E1123 06:51:34.052988 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.125042 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.125086 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.125097 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.125114 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.125124 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.227477 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.227529 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.227543 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.227561 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.227572 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.330564 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.330662 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.330679 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.330701 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.330716 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.434144 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.434282 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.434302 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.434328 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.434345 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.537198 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.537254 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.537274 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.537297 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.537314 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.641292 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.641359 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.641378 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.641404 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.641422 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.744920 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.745027 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.745051 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.745077 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.745100 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.848698 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.848759 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.848778 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.848801 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.848821 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.951368 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.951397 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.951405 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.951420 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:34 crc kubenswrapper[5028]: I1123 06:51:34.951429 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:34Z","lastTransitionTime":"2025-11-23T06:51:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.052333 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.052355 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:35 crc kubenswrapper[5028]: E1123 06:51:35.052739 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:35 crc kubenswrapper[5028]: E1123 06:51:35.052863 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.055213 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.056193 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.056222 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.056271 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.056299 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.159652 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.159717 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.159743 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.159772 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.159792 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.262009 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.262047 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.262058 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.262074 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.262086 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.365507 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.365567 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.365584 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.365619 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.365639 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.468258 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.468336 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.468376 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.468401 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.468415 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.570699 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.570745 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.570758 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.570777 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.570793 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.673123 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.673156 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.673169 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.673185 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.673197 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.777049 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.777354 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.777567 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.777786 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.777938 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.880867 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.881375 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.881656 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.881907 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.882113 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.985517 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.986006 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.986185 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.986386 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:35 crc kubenswrapper[5028]: I1123 06:51:35.986555 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:35Z","lastTransitionTime":"2025-11-23T06:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.052812 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.052870 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:36 crc kubenswrapper[5028]: E1123 06:51:36.053053 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:36 crc kubenswrapper[5028]: E1123 06:51:36.053171 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.091222 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.091570 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.091765 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.091933 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.092152 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.195220 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.195682 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.195910 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.196203 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.196428 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.301044 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.301105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.301126 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.301150 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.301172 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.404575 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.404623 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.404640 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.404671 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.404690 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.507103 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.507163 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.507180 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.507205 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.507223 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.610105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.610172 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.610198 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.610227 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.610284 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.713430 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.713467 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.713479 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.713497 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.713509 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.816252 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.816298 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.816324 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.816340 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.816350 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.919410 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.919479 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.919513 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.919557 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:36 crc kubenswrapper[5028]: I1123 06:51:36.919581 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:36Z","lastTransitionTime":"2025-11-23T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.023164 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.023226 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.023249 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.023279 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.023306 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.053134 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.053183 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:37 crc kubenswrapper[5028]: E1123 06:51:37.053385 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:37 crc kubenswrapper[5028]: E1123 06:51:37.053491 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.069394 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.088776 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.108897 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.124645 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.126170 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.126199 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.126210 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.126229 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.126242 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.139379 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.153297 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 
2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.165845 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.181756 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.194944 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.208574 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.223060 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.228475 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.228519 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.228531 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.228549 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.228560 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.241914 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.253618 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.265503 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.283705 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.295543 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.308115 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.322984 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.331035 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.331080 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.331092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.331112 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.331124 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.333909 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:37Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.433837 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.433886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.433899 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.433917 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.433931 5028 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.536424 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.536472 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.536481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.536493 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.536502 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.639113 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.639140 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.639147 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.639159 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.639168 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.742450 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.742512 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.742529 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.742561 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.742578 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.844997 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.845051 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.845063 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.845081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.845092 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.948238 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.948307 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.948334 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.948366 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:37 crc kubenswrapper[5028]: I1123 06:51:37.948388 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:37Z","lastTransitionTime":"2025-11-23T06:51:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.051008 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.051069 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.051087 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.051110 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.051127 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.052560 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.052570 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:51:38 crc kubenswrapper[5028]: E1123 06:51:38.052746 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
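Every NotReady heartbeat and every "No sandbox for pod can be found" failure in this stretch reduces to the same condition: nothing has written a CNI network config into /etc/kubernetes/cni/net.d/ yet, so the runtime reports NetworkReady=false and sandbox creation is refused. The check is easy to reproduce on the node; a stdlib-only sketch, with the extension list following the usual CNI conventions rather than anything stated in this log:

```python
# Reproduce the readiness test the kubelet keeps failing: is there any
# CNI network configuration on disk yet? Directory from the log message;
# the suffixes are the conventional CNI config extensions.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

confs = sorted(
    p for p in CNI_CONF_DIR.glob("*")
    if p.suffix in {".conf", ".conflist", ".json"}
) if CNI_CONF_DIR.is_dir() else []

if confs:
    for p in confs:
        print("found:", p)
else:
    print(f"no CNI configuration file in {CNI_CONF_DIR}/ "
          "- network provider not started yet")
```

Once the network provider writes its config here, NetworkReady flips and the node's Ready condition follows.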
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.155643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.155730 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.155768 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.155800 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.155820 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.258321 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.258418 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.258451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.258484 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.258507 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.361685 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.361749 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.361770 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.361795 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.361816 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.465402 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.465466 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.465490 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.465519 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.465543 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.568716 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.568779 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.568803 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.568832 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.568856 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.671813 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.671881 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.671933 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.672000 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.672015 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.774336 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.774398 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.774415 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.774439 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.774460 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.876975 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.877021 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.877033 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.877050 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.877062 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.979560 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.979620 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.979631 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.979652 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:38 crc kubenswrapper[5028]: I1123 06:51:38.979667 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:38Z","lastTransitionTime":"2025-11-23T06:51:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.053104 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.053135 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:39 crc kubenswrapper[5028]: E1123 06:51:39.053256 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:39 crc kubenswrapper[5028]: E1123 06:51:39.053356 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.082274 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.082311 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.082326 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.082371 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.082388 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.185733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.185797 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.185817 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.185843 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.185864 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.288527 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.288555 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.288563 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.288575 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.288584 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.392130 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.392207 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.392226 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.392257 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.392275 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.495761 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.495838 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.495856 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.495886 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.495907 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.599317 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.599386 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.599405 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.599435 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.599453 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.702297 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.702351 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.702363 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.702383 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.702394 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.804121 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.804194 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.804213 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.804231 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.804242 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.906844 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.906882 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.906891 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.906908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:39 crc kubenswrapper[5028]: I1123 06:51:39.906917 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:39Z","lastTransitionTime":"2025-11-23T06:51:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.009871 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.009911 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.009921 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.009937 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.009967 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.052230 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.052399 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.052230 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.053101 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.113152 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.113197 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.113210 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.113228 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.113240 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.216195 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.216243 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.216251 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.216264 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.216275 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.319159 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.319224 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.319242 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.319271 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.319293 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.421126 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.421166 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.421176 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.421193 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.421204 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.523222 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.523265 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.523276 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.523293 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.523304 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.626102 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.626444 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.626932 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.629832 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.629861 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.732535 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.732562 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.732570 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.732584 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.732595 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.835576 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.835604 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.835613 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.835626 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.835635 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.852538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.852731 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.852700231 +0000 UTC m=+148.550105020 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
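This teardown cannot proceed because the kubevirt.io.hostpath-provisioner CSI driver has not (re)registered with the kubelet since the restart, so the failed operation is parked on an exponential backoff: the logged durationBeforeRetry of 1m4s is a 500 ms base doubled seven times. A minimal sketch of such a schedule; the base and cap are assumptions chosen to be consistent with the 1m4s above, not values taken from this log:

```python
# Delays produced by doubling a 500 ms base, as used for retrying failed
# volume operations. Base and cap are assumptions consistent with the
# "durationBeforeRetry 1m4s" in the entry above.
from datetime import timedelta

def backoff_schedule(base=timedelta(milliseconds=500),
                     cap=timedelta(minutes=2, seconds=2),
                     attempts=10):
    delay = base
    for n in range(attempts):
        yield n, delay
        delay = min(delay * 2, cap)

for n, delay in backoff_schedule():
    print(f"attempt {n}: wait {delay}")
# attempt 7 waits 0:01:04 -- the 1m4s logged above
```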
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.938194 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.938259 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.938284 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.938318 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.938341 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:40Z","lastTransitionTime":"2025-11-23T06:51:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.954348 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.954436 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.954482 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:40 crc kubenswrapper[5028]: I1123 06:51:40.954528 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954562 5028 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954652 5028 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954684 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.954653974 +0000 UTC m=+148.652058853 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954702 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954735 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.954711896 +0000 UTC m=+148.652116715 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954736 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954763 5028 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954798 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954830 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.954809888 +0000 UTC m=+148.652214747 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954848 5028 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954869 5028 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 23 06:51:40 crc kubenswrapper[5028]: E1123 06:51:40.954974 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.954919831 +0000 UTC m=+148.652324650 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
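Each of these mount failures has the same shape: the kubelet's object store has not yet synced the referenced Secret or ConfigMap, so the volume plugin reports object "ns"/"name" not registered and the operation joins the same 1m4s backoff. To see the distinct objects being waited on, collect them from a saved journal; a stdlib sketch (the input file name is illustrative):

```python
# Tally the distinct 'object "ns"/"name" not registered' failures from a
# saved journal so the blocked Secrets/ConfigMaps are visible at a glance.
import re
from collections import Counter

pat = re.compile(r'object "([^"]+)"/"([^"]+)" not registered')

waiting = Counter()
with open("kubelet.log") as fh:          # illustrative file name
    for line in fh:
        line = line.replace('\\"', '"')  # undo quoted-field escaping
        waiting.update(pat.findall(line))

for (ns, name), n in waiting.most_common():
    print(f"{n:4d}  {ns}/{name}")
```

On this stretch it would list kube-root-ca.crt and openshift-service-ca.crt in openshift-network-diagnostics, plus networking-console-plugin-cert and networking-console-plugin in openshift-network-console.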
Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.041225 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.041287 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.041305 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.041329 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.041349 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.053007 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:51:41 crc kubenswrapper[5028]: E1123 06:51:41.053239 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.053045 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:51:41 crc kubenswrapper[5028]: E1123 06:51:41.053474 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
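The five-event heartbeat block around these entries repeats every ~100 ms and will continue until the Ready condition flips, so rather than tailing the journal it can be easier to watch the condition itself. A minimal sketch using the official `kubernetes` Python client, assumed installed and pointed at a working kubeconfig; the node name crc is taken from the log:

```python
# Poll the node's Ready condition until the network comes up, instead of
# tailing the repeating NodeNotReady heartbeats in the journal.
import time
from kubernetes import client, config

config.load_kube_config()                # assumes a reachable kubeconfig
v1 = client.CoreV1Api()

while True:
    node = v1.read_node("crc")           # node name from the log
    ready = next(c for c in node.status.conditions if c.type == "Ready")
    print(f"Ready={ready.status} reason={ready.reason}: {ready.message}")
    if ready.status == "True":
        break
    time.sleep(5)
```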
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.144478 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.145020 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.145210 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.145381 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.145533 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.249105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.250001 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.250182 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.250345 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.250498 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.353420 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.353456 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.353466 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.353480 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.353492 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.456568 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.456619 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.456640 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.456670 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.456692 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.558890 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.558921 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.558931 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.558969 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.558980 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.662548 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.662632 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.662654 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.662681 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.662704 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.766481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.766546 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.766563 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.766587 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.766607 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.869594 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.869680 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.869703 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.869733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.869755 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.973139 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.973202 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.973226 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.973254 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:41 crc kubenswrapper[5028]: I1123 06:51:41.973279 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:41Z","lastTransitionTime":"2025-11-23T06:51:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.052791 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.052791 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:42 crc kubenswrapper[5028]: E1123 06:51:42.053296 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:42 crc kubenswrapper[5028]: E1123 06:51:42.053887 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.054373 5028 scope.go:117] "RemoveContainer" containerID="3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.076393 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.076855 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.076883 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.076917 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.076940 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.179413 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.179469 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.179516 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.179541 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.179601 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.282681 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.282736 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.282755 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.282782 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.282799 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.386135 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.386187 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.386199 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.386218 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.386233 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.490511 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.490587 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.490606 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.490632 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.490652 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.593324 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.593385 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.593407 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.593433 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.593455 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.695846 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.695920 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.695937 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.695989 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.696044 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.798539 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.798564 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.798573 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.798586 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.798595 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.900449 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.900804 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.900902 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.900995 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:42 crc kubenswrapper[5028]: I1123 06:51:42.901066 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:42Z","lastTransitionTime":"2025-11-23T06:51:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.011630 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.012136 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.012734 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.012845 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.012943 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.052648 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.052785 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:43 crc kubenswrapper[5028]: E1123 06:51:43.052937 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:43 crc kubenswrapper[5028]: E1123 06:51:43.053063 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.116097 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.116143 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.116158 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.116178 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.116193 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.218113 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.218134 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.218142 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.218154 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.218163 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.320571 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.320599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.320607 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.320619 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.320628 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.423549 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.423593 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.423601 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.423616 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.423625 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.521758 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/2.log" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.523964 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.524267 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.525362 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.525419 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.525436 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.525456 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.525468 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.540326 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.551523 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.567618 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.581845 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.595149 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.607189 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.624049 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.627304 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.627338 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.627349 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.627364 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.627374 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.636478 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.645525 5028 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.656088 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.664979 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.673370 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.681699 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.691184 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.702541 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.712174 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.728970 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.729280 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.729303 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.729313 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.729331 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.729341 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.748987 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.762790 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.831704 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.831739 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.831749 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.831763 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.831773 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.919666 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.919716 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.919728 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.919746 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.919757 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: E1123 06:51:43.933640 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.937305 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.937329 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.937337 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.937353 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.937364 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: E1123 06:51:43.949926 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.953134 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.953157 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.953165 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.953178 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.953187 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: E1123 06:51:43.964470 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.971025 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.971081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.971092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.971107 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.971118 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:43 crc kubenswrapper[5028]: E1123 06:51:43.985245 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.988189 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.988217 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.988228 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.988245 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:43 crc kubenswrapper[5028]: I1123 06:51:43.988256 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:43Z","lastTransitionTime":"2025-11-23T06:51:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: E1123 06:51:44.000440 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: E1123 06:51:44.000552 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.001802 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.001830 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.001839 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.001875 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.001886 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.052574 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.052638 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:44 crc kubenswrapper[5028]: E1123 06:51:44.052701 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:44 crc kubenswrapper[5028]: E1123 06:51:44.052820 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.104739 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.104775 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.104785 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.104798 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.104808 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.207004 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.207048 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.207059 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.207073 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.207082 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.309160 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.309197 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.309206 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.309220 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.309229 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.411092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.411127 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.411139 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.411155 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.411164 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.513034 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.513081 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.513098 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.513117 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.513129 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.527544 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/3.log" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.528171 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/2.log" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.530653 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" exitCode=1 Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.530708 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.530746 5028 scope.go:117] "RemoveContainer" containerID="3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.531558 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 06:51:44 crc kubenswrapper[5028]: E1123 06:51:44.531739 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.557926 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.572886 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.585776 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.600417 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 
2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.616096 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.616146 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.616160 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.616179 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.616193 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.616221 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:
44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.631019 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.644523 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.660876 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.674520 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.686594 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.700902 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.713605 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.718721 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.718780 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.718798 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.718821 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.718839 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.725490 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.739685 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.755486 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b0a9c0678d87825c384a1ad20b8a2bb7500c23ff559bf3a5b6dcbfae30fe33c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:11Z\\\",\\\"message\\\":\\\"hift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-ovn-kubernetes/ovnkube-node-xbtxp openshift-dns/node-resolver-678pf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2 openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-th92p openshift-multus/multus-m2sl7 openshift-etcd/etcd-crc]\\\\nI1123 06:51:11.398169 6674 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1123 06:51:11.398180 6674 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398191 6674 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398197 6674 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1123 06:51:11.398201 6674 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1123 06:51:11.398205 6674 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI1123 06:51:11.398218 6674 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1123 06:51:11.398262 6674 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"3 06:51:43.818659 7085 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI1123 06:51:43.818660 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-th92p\\\\nF1123 06:51:43.818668 7085 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:51:43.818673 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-7l9fm\\\\nI1123 06:51:43.818678 7085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.770268 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.780695 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.791387 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.802699 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:44Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.821473 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.821516 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.821530 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.821551 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.821560 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.923501 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.923540 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.923550 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.923564 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:44 crc kubenswrapper[5028]: I1123 06:51:44.923573 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:44Z","lastTransitionTime":"2025-11-23T06:51:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.026772 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.026826 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.026842 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.026864 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.026881 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.052577 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.052650 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:45 crc kubenswrapper[5028]: E1123 06:51:45.052727 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:45 crc kubenswrapper[5028]: E1123 06:51:45.052782 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.129402 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.129437 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.129446 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.129461 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.129470 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.231799 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.231834 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.231847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.231863 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.231875 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.334662 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.334706 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.334721 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.334739 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.334751 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.437501 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.437546 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.437559 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.437578 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.437592 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.541673 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/3.log" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.542517 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.542569 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.542587 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.542610 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.542629 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.546325 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 06:51:45 crc kubenswrapper[5028]: E1123 06:51:45.546473 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.568124 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.583434 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.598724 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.616616 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.632218 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.645757 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.645786 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.645795 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.645809 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.645837 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.646538 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.660483 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.672670 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.687203 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.715262 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"3 06:51:43.818659 7085 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI1123 06:51:43.818660 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-th92p\\\\nF1123 06:51:43.818668 7085 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:51:43.818673 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-7l9fm\\\\nI1123 06:51:43.818678 7085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.731745 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.743183 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.747969 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.748015 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.748030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.748048 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.748063 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.752462 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.762729 5028 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.785442 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.800100 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.813087 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.827429 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.840453 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:45Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.850225 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.850272 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.850283 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.850297 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.850304 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.953067 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.953103 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.953114 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.953130 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:45 crc kubenswrapper[5028]: I1123 06:51:45.953142 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:45Z","lastTransitionTime":"2025-11-23T06:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.052355 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:46 crc kubenswrapper[5028]: E1123 06:51:46.052486 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.052369 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:46 crc kubenswrapper[5028]: E1123 06:51:46.052656 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.055176 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.055202 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.055211 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.055221 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.055229 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.157844 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.158165 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.158266 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.158345 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.158411 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.261330 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.261377 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.261388 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.261407 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.261421 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.363432 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.363474 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.363489 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.363508 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.363522 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.466272 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.466318 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.466329 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.466350 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.466360 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.569283 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.569319 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.569330 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.569347 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.569359 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.671891 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.672237 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.672249 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.672265 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.672276 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.774915 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.774982 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.774999 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.775020 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.775032 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.877667 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.877712 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.877721 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.877739 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.877749 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.980322 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.980454 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.980471 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.980493 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:46 crc kubenswrapper[5028]: I1123 06:51:46.980508 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:46Z","lastTransitionTime":"2025-11-23T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.052647 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.052773 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:47 crc kubenswrapper[5028]: E1123 06:51:47.052890 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:47 crc kubenswrapper[5028]: E1123 06:51:47.053037 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.067526 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.082945 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.083224 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.083250 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.083259 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.083273 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.083284 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.101231 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.114825 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.127716 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.159623 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367b
f97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.175195 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.186129 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.186162 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.186171 5028 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.186186 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.186195 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.189918 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.203616 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.216240 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.228556 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.246444 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.261541 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.282759 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.288228 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.288278 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.288291 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.288306 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.288319 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.303873 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"3 06:51:43.818659 7085 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI1123 06:51:43.818660 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-th92p\\\\nF1123 06:51:43.818668 7085 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:51:43.818673 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-7l9fm\\\\nI1123 06:51:43.818678 7085 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-machine-config-operator/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.322576 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.337311 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.357266 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.373152 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:47Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.391395 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.391437 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.391448 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.391469 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.391482 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.493714 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.493755 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.493767 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.493794 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.493816 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.596248 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.596286 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.596296 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.596314 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.596325 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.699018 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.699047 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.699056 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.699070 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.699079 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.801484 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.801524 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.801535 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.801549 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.801559 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.904780 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.904832 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.904851 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.904877 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:47 crc kubenswrapper[5028]: I1123 06:51:47.904894 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:47Z","lastTransitionTime":"2025-11-23T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.007783 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.007824 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.007835 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.007848 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.007857 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.052865 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.052893 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:48 crc kubenswrapper[5028]: E1123 06:51:48.053147 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:48 crc kubenswrapper[5028]: E1123 06:51:48.053233 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.110194 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.110236 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.110247 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.110262 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.110271 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.212438 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.212473 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.212483 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.212499 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.212510 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.315481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.315525 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.315537 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.315558 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.315571 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.418785 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.418821 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.418832 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.418848 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.418857 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.520817 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.520847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.520859 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.520873 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.520884 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.624243 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.624280 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.624292 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.624307 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.624318 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.726810 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.726853 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.726865 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.726881 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.726892 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.829190 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.829232 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.829245 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.829259 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.829270 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.931790 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.931830 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.931841 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.931853 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:48 crc kubenswrapper[5028]: I1123 06:51:48.931862 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:48Z","lastTransitionTime":"2025-11-23T06:51:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.034999 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.035040 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.035051 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.035070 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.035083 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.052612 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.052685 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:49 crc kubenswrapper[5028]: E1123 06:51:49.052781 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:49 crc kubenswrapper[5028]: E1123 06:51:49.052843 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.138998 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.139058 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.139077 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.139105 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.139125 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.242536 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.242575 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.242588 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.242605 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.242616 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.345937 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.346054 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.346079 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.346110 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.346135 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.449317 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.449366 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.449384 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.449404 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.449417 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.552501 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.553481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.553659 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.553857 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.554093 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.657444 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.657544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.657564 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.657588 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.657604 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.760323 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.760420 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.760475 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.760499 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.760515 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.863595 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.863663 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.863686 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.863714 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.863733 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.966570 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.966654 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.966679 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.966725 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:49 crc kubenswrapper[5028]: I1123 06:51:49.966752 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:49Z","lastTransitionTime":"2025-11-23T06:51:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.052085 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.052255 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:50 crc kubenswrapper[5028]: E1123 06:51:50.052436 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:50 crc kubenswrapper[5028]: E1123 06:51:50.052559 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.069030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.069058 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.069070 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.069086 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.069097 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.172363 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.172410 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.172422 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.172440 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.172451 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.275735 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.275785 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.275801 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.275821 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.275834 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.379156 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.379401 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.379418 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.379438 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.379452 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.482461 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.482503 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.482512 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.482527 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.482536 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.590655 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.590756 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.590774 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.590834 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.590853 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.694224 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.694301 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.694326 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.694360 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.694383 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.797010 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.797046 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.797058 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.797074 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.797085 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.900532 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.900563 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.900572 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.900584 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:50 crc kubenswrapper[5028]: I1123 06:51:50.900592 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:50Z","lastTransitionTime":"2025-11-23T06:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.003187 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.003234 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.003251 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.003274 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.003290 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.053131 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:51 crc kubenswrapper[5028]: E1123 06:51:51.053287 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.053407 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:51 crc kubenswrapper[5028]: E1123 06:51:51.053574 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.107131 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.107375 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.107435 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.107493 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.107553 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.210509 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.210548 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.210559 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.210575 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.210588 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.313733 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.313786 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.313801 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.313822 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.313834 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.417086 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.417143 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.417155 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.417175 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.417186 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.520146 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.520215 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.520236 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.520261 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.520280 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.623263 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.623312 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.623324 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.623341 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.623356 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.727455 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.727550 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.727567 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.727590 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.727607 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.830694 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.830761 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.830785 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.830814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.830832 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.933481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.933551 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.933569 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.933599 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:51 crc kubenswrapper[5028]: I1123 06:51:51.933618 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:51Z","lastTransitionTime":"2025-11-23T06:51:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.038053 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.038121 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.038138 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.038164 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.038183 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.052504 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.052528 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:51:52 crc kubenswrapper[5028]: E1123 06:51:52.052749 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:52 crc kubenswrapper[5028]: E1123 06:51:52.052861 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.140378 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.140451 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.140464 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.140481 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.140493 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.242930 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.243003 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.243046 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.243067 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.243083 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.345065 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.345115 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.345126 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.345143 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.345158 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.448347 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.448421 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.448439 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.448466 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.448486 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.551643 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.551685 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.551694 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.551708 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.551717 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.654148 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.654214 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.654224 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.654248 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.654260 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.756282 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.756323 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.756331 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.756345 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.756354 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.859050 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.859111 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.859128 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.859151 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.859168 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.961576 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.961622 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.961633 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.961654 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:52 crc kubenswrapper[5028]: I1123 06:51:52.961666 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:52Z","lastTransitionTime":"2025-11-23T06:51:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.052743 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:53 crc kubenswrapper[5028]: E1123 06:51:53.052888 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.052971 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:53 crc kubenswrapper[5028]: E1123 06:51:53.053123 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.063363 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.063414 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.063426 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.063445 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.063458 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.165901 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.165940 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.165964 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.165979 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.165988 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.269255 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.269290 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.269318 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.269333 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.269343 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.372821 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.372883 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.372907 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.372936 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.372991 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.475926 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.475994 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.476007 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.476030 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.476047 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.578428 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.578465 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.578475 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.578492 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.578503 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.680609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.680642 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.680650 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.680663 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.680672 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.783147 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.783211 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.783228 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.783253 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.783270 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.886158 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.886230 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.886256 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.886287 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.886311 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.990122 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.990200 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.990232 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.990260 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:53 crc kubenswrapper[5028]: I1123 06:51:53.990280 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:53Z","lastTransitionTime":"2025-11-23T06:51:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.052759 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.052804 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
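The block above is one more pass of a cycle the kubelet has been repeating roughly every 100 ms: four node events plus a setters.go condition update, with pods such as openshift-network-diagnostics/network-check-target-xd92c and openshift-multus/network-metrics-daemon-5ft9z unable to get a sandbox because the runtime still reports NetworkReady=false. Everything hangs on a single check: whether a CNI network configuration exists under /etc/kubernetes/cni/net.d/. The sketch below is a minimal standalone illustration of that check, assuming only the directory named in the message; it is not kubelet's actual code path (which goes through the CRI and libcni), but libcni does accept *.conf, *.conflist and *.json files in its conf dir.

package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // Directory taken verbatim from the log message above.
    confDir := "/etc/kubernetes/cni/net.d"
    // Extensions libcni treats as candidate network configs; until one such
    // file appears, the runtime keeps reporting NetworkReady=false.
    var found []string
    for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
        matches, err := filepath.Glob(filepath.Join(confDir, pat))
        if err != nil {
            panic(err) // filepath.Glob only fails on a malformed pattern
        }
        found = append(found, matches...)
    }
    if len(found) == 0 {
        fmt.Println("no CNI configuration file found; node would stay NotReady")
        return
    }
    fmt.Println("CNI config present:", found)
}

Nothing in this window writes that file, so the cycle repeats unchanged; a likelier root cause surfaces below, where node-status patches begin failing against an expired webhook certificate.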
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:51:54 crc kubenswrapper[5028]: E1123 06:51:54.053037 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.093550 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.093595 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.093607 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.093626 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.093637 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.164829 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.164874 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.164891 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.164912 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.164926 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:51:54 crc kubenswrapper[5028]: E1123 06:51:54.179149 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:54Z is after 2025-08-24T17:21:41Z"
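Here, finally, is an error that is not the CNI loop: the node-status patch is rejected because the API server cannot complete the call to the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/node, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-23T06:51:54Z. The sketch below is a diagnostic illustration only (it is not OpenShift tooling): it fetches the certificate from that endpoint and replays the x509 validity comparison that failed.

package main

import (
    "crypto/tls"
    "fmt"
    "time"
)

func main() {
    // Endpoint taken from the failed webhook POST in the error above.
    addr := "127.0.0.1:9743"
    // InsecureSkipVerify lets us retrieve the certificate even though normal
    // verification (the step that failed above) would reject it.
    conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    defer conn.Close()
    cert := conn.ConnectionState().PeerCertificates[0]
    now := time.Now().UTC()
    fmt.Printf("NotBefore=%s\nNotAfter=%s\nnow=%s\n", cert.NotBefore, cert.NotAfter, now)
    if now.After(cert.NotAfter) {
        fmt.Println("certificate has expired; same verdict as the kubelet error")
    }
}

Until that certificate is rotated, every status patch in this log keeps failing with the same x509 error and being retried, as the entries that follow show.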
event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.184078 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.184104 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.184123 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: E1123 06:51:54.198519 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.202699 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.202735 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.202748 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.202764 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.202774 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: E1123 06:51:54.217613 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:54Z is after 2025-08-24T17:21:41Z"
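The status patch above fails because the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24, months before the node's current clock time of 2025-11-23. A minimal sketch of how to confirm this from the node itself, assuming the webhook is reachable at 127.0.0.1:9743 as in the log; InsecureSkipVerify disables the very check that fails above, only so the handshake completes far enough for the certificate to be read:

```go
package main

import (
    "crypto/tls"
    "fmt"
    "time"
)

func main() {
    // Webhook endpoint taken from the log line above; assumed to be
    // reachable only from the node itself.
    addr := "127.0.0.1:9743"

    conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    defer conn.Close()

    now := time.Now()
    for _, cert := range conn.ConnectionState().PeerCertificates {
        fmt.Printf("subject=%q notAfter=%s\n",
            cert.Subject.String(), cert.NotAfter.Format(time.RFC3339))
        if now.After(cert.NotAfter) {
            // Matches the failure above: current time 2025-11-23T06:51:54Z
            // is after the certificate's notAfter of 2025-08-24T17:21:41Z.
            fmt.Println("certificate has expired")
        }
    }
}
```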
event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.222590 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.222610 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.222624 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: E1123 06:51:54.235666 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.239986 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.240033 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.240050 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.240071 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.240087 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: E1123 06:51:54.256809 5028 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ff061f1e-f458-4bca-a72d-af8aa57016f2\\\",\\\"systemUUID\\\":\\\"fc0a1b0a-26b0-4c3e-92d4-29192e43f43f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:54Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:54 crc kubenswrapper[5028]: E1123 06:51:54.256920 5028 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.258868 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
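Each retry submits the same patch and hits the same expired certificate, so after a fixed number of attempts the kubelet logs "update node status exceeds retry count" and gives up until the next sync period. A simplified stand-in for that loop, not kubelet source; the constant of 5 mirrors the upstream kubelet's nodeStatusUpdateRetry default (an assumption for this sketch), and tryPatchStatus is a hypothetical placeholder for the failing PATCH:

```go
package main

import (
    "errors"
    "fmt"
)

// Assumption: mirrors the upstream kubelet constant nodeStatusUpdateRetry.
const nodeStatusUpdateRetry = 5

// tryPatchStatus is a hypothetical stand-in for the PATCH that fails in the
// log: the webhook's serving certificate is expired, so every attempt fails
// identically.
func tryPatchStatus(attempt int) error {
    return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired`)
}

func main() {
    for i := 0; i < nodeStatusUpdateRetry; i++ {
        if err := tryPatchStatus(i); err != nil {
            fmt.Printf("Error updating node status, will retry: %v\n", err)
            continue
        }
        return
    }
    fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```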
event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.258905 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.258922 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.258944 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.258978 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.361435 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.361487 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.361507 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.361531 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.361548 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.465657 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.465696 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.465707 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.465724 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.465735 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.569062 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.569119 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.569132 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.569150 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.569189 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.672506 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.672547 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.672560 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.672580 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.672591 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.776139 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.776218 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.776242 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.776274 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.776297 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.879150 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.879196 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.879207 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.879223 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.879234 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.982301 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.982369 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.982386 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.982412 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:54 crc kubenswrapper[5028]: I1123 06:51:54.982430 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:54Z","lastTransitionTime":"2025-11-23T06:51:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.052255 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.052355 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:51:55 crc kubenswrapper[5028]: E1123 06:51:55.052481 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:55 crc kubenswrapper[5028]: E1123 06:51:55.052750 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085417 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085484 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085509 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085573 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188798 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188880 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188927 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085417 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085484 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085509 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.085573 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188798 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188860 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188880 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.188927 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.292400 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.292480 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.292499 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.292529 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.292551 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.426410 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.426486 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.426507 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.426537 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.426559 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.530445 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.530515 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.530533 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.530558 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.530578 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.633791 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.633847 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.633865 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.633888 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.633903 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.736696 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.736735 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.736747 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.736765 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.736776 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.839609 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.839686 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.839714 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.839747 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.839767 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.942330 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.942370 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.942381 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.942399 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:55 crc kubenswrapper[5028]: I1123 06:51:55.942410 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:55Z","lastTransitionTime":"2025-11-23T06:51:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.045285 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.045343 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.045355 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.045377 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.045389 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.052673 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.052731 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:56 crc kubenswrapper[5028]: E1123 06:51:56.052989 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be"
Nov 23 06:51:56 crc kubenswrapper[5028]: E1123 06:51:56.053169 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.149327 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.149419 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.149442 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.149472 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.149491 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.253271 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.253317 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.253329 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.253347 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.253358 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.355899 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.355982 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.355999 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.356024 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.356039 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.458346 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.458384 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.458396 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.458411 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.458420 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.561295 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.561380 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.561424 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.561466 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.561493 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.666033 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.666073 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.666085 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.666103 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.666112 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.769563 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.769649 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.769666 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.769695 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.769719 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.873837 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.873884 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.873894 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.873908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.873917 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.977014 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.977075 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.977096 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.977120 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:56 crc kubenswrapper[5028]: I1123 06:51:56.977137 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:56Z","lastTransitionTime":"2025-11-23T06:51:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.052472 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.052492 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:51:57 crc kubenswrapper[5028]: E1123 06:51:57.052726 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 23 06:51:57 crc kubenswrapper[5028]: E1123 06:51:57.053968 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.064402 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-678pf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34c0e27d-8812-4054-83c4-eca66db0655e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d097da4fa5af12f5d862b819fcfda94391fb44cdbe535c2ba2cbc5997b576e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zb22n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:41Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-678pf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.076393 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-m2sl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e634c65f-8585-4d5d-b929-b9e1255f8921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:30Z\\\",\\\"message\\\":\\\"2025-11-23T06:50:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce\\\\n2025-11-23T06:50:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b4807c00-3757-41ff-b4b9-a728b459f9ce to /host/opt/cni/bin/\\\\n2025-11-23T06:50:45Z [verbose] multus-daemon started\\\\n2025-11-23T06:50:45Z [verbose] Readiness Indicator file check\\\\n2025-11-23T06:51:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-whssm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-m2sl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.080115 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.080149 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.080159 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.080194 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.080205 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.097225 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68dc0fb8-309c-46ef-a4f8-f0eff3169061\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-23T06:51:43Z\\\",\\\"message\\\":\\\"3 06:51:43.818659 7085 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI1123 06:51:43.818660 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-machine-config-operator/machine-config-daemon-th92p\\\\nF1123 06:51:43.818668 7085 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:43Z is after 2025-08-24T17:21:41Z]\\\\nI1123 06:51:43.818673 7085 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-7l9fm\\\\nI1123 06:51:43.818678 7085 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/mac\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:51:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdsqn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xbtxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.126795 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5609ffb8-6ac2-4716-8c08-c466b3dd987b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9570343c4d38edb6ae28c9d43dd785bc82b4214bb45f1436155ca7395a78d216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e35ebcb16905bafaaa0fef1fab0ef4a50b826c38cb43231977ff6f4b4f5437ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32fd2babb6173dedee2e4bf8c0d01fc1532d19067646fb45cb8f9ba0d8fad0db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763ed2c20d7624a128e91c560521240d0c358b941007d7121d9027d1a76e3b13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://368edde6874d88e8a5d675a2d213da96123dfc9726d85e00d93d38851a5f84b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6b5b36c03381fc5385f583e6551c9bd32e55a6a5a40b279b7c89c1b1409e48d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7476c537d63545e188bb20229ae0932077e17b9ede90fb49299e66bcbcf6ff9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmx9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-7l9fm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.143931 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bfed01d0-dd8f-478d-991f-4a9242b1c2be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wrpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5ft9z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.164455 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3447b712-fe62-45f5-9a32-f3db403145b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0894d9184090a91bbad533f479119a898fda42dfa27d25259764a76f1c5a4f21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb4783be5816f7304cee81453b5bcda22a5a8b39d6755e5900002dff2eae927e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://204f047aaaab369a625e233bc338064d07649e582032319fe7fc761647f94b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f027b746e47762038332c54dc1ce60dbac181a557ee5e51949041c5fb24c3178\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe432bf78cbe5bcdeae8b279cafe6ad56fd8ca8faff8b17edfa5a4f9af0852c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-23T06:50:30Z\\\",\\\"message\\\":\\\"W1123 06:50:20.260016 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1123 06:50:20.260282 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763880620 cert, and key in /tmp/serving-cert-3598931395/serving-signer.crt, /tmp/serving-cert-3598931395/serving-signer.key\\\\nI1123 06:50:20.445091 1 observer_polling.go:159] Starting file observer\\\\nW1123 06:50:20.447792 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1123 06:50:20.448111 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1123 06:50:20.450773 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3598931395/tls.crt::/tmp/serving-cert-3598931395/tls.key\\\\\\\"\\\\nF1123 06:50:30.834826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67ec945fc150ccefa744b99fc17f8a0e125122a1468e5ce7659f38726b30ffd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a237ebd11699fa82eb68fcb6a95bf7e4111216f60831953b212924aeaf6b470\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.184711 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.184768 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.184787 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.184814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.184834 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.187115 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:40Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fef765c66b8e6e6cbbb25282d1c9a96358ca09f29da6f1f2dbb2473b211ee781\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z"
Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.206079 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"de3a57f7-7066-475e-97cc-56a930cd7126\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbd13c882e65b2ffc3fb6129a9686f04ffcf9e897baf61a13e9038268cbc8f52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a6c94303f363dad48ecdeca2eed87187e829c81af94b8012f1bb1870e936c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.222662 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2326ae5b-2300-40a4-ae87-be0b1b781af6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efcf64f421374cf30bc55933b9ba5850f6845c6b701bb862574a62ae4f40ada6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75d6bc94e752fe4c62c96a976a460ed800fb0da6f5b31d18bb31035eb89fd470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lmtk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5dbq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 
06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.240151 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"930b43ed-a9fd-43ec-8e2c-fb40a2f705e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:51:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308d1f8afe8e7141f0b4222b6086a6b81f540664c23fa331e0577910db3ff00e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d351ab0852a29b629986f8a5ede755c49c2db8238b03c6f5c55c7f78b4b74c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f007c486e29aceb956c9966d000b8ec3191a61585aa84e1ec258b50ad65900e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://080b5fe795ff807802c7a00f6bd2ce6444977a2e2996458f4e62f8f82a47a08f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.256729 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4bfd48e77a9b4aae8175d0cc090ee81634fc28b62cb2d35dacb7a119bbe8374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.269652 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-w2dj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ab6d996-d9c7-42c7-8d70-00f3575144b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63808fd0a52d143c867d18946dd39970c1afcccf164b2c7bfad095474a366992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qbtrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-w2dj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.288260 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.288314 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.288326 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:57 crc 
kubenswrapper[5028]: I1123 06:51:57.288345 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.288360 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.304584 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4d42cf1-cc80-47fb-8e80-6b56a4a5a9ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a254551b29db30d19b3caa26ac8c461465c89d990f06c5ca38706923eecf742\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a282ab5c3f3e84a3031bb40eda2ad965464df094c89dc195f6ed43c0aa2dcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://01a5f1b2483de0c926c8964682fa95b327a6c3a884d5ca27cddf26943a0c9529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://885209b7b5c01be3b2cf4da31ca79c281f3367bf97510578fa1963bdefc215af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05f5e30f3814ccafea0bc3bc64da7ade1bc9bcc810c8e842f7b809eead9b1a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4f2893aac35a3e6f4e36c7daa115481161b672f6c89acf057ad34d9fec4998a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88e
dce50ce91e8cf0fc902d0e1430f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3074241d319beeeedef24c59443813ce88edce50ce91e8cf0fc902d0e1430f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3235b39878b4196d2963d0e84836ae442d09889491a2c06685989b9e9cbca475\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-23T06:50:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.328588 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11a7354b-21b3-4946-afdc-c629483ca020\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1277d066d2446ff3b4c28d6bfb70c611be89cb1d908be0f5b6f077f26847ee59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd307ad59297a6fb924bd1e780ddc2f8d7c49e477daeaf731da532879292e162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f68994451e0a6521eb43631785a69bfb9a323a987f88465995f2dd62263f5ecd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505f1b7aac2bcd95db422f6281b525cf8e10dccefc2854c99fc29f0c50c12c5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.352281 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4363c7c99691fc766e77700cb677c50f263dc9969cd40472bf4cfa4ee3e82dee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efe0346c579e98750e50cfa1ac532ab6f070b7c738fc5f4a680a2b8ecdc8bd9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.374422 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.391047 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.391080 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.391090 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.391107 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.391118 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.392676 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa1c051a-31cd-4dd3-9be8-6194822c2273\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048246a12b761e47bec5a6bca85ca26acb4d427acd6b3910f2cc2d8fe873e233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-23T06:50:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hvg8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-23T06:50:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-th92p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.406846 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.418013 5028 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-23T06:50:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-23T06:51:57Z is after 2025-08-24T17:21:41Z" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.493814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.493841 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.493851 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.493865 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.493874 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.595547 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.595866 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.596023 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.596205 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.596266 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.699251 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.699324 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.699349 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.699381 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.699409 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.802261 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.802319 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.802337 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.802364 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:57 crc kubenswrapper[5028]: I1123 06:51:57.802387 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:57Z","lastTransitionTime":"2025-11-23T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:51:58 crc kubenswrapper[5028]: I1123 06:51:58.052294 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:51:58 crc kubenswrapper[5028]: I1123 06:51:58.052346 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:51:58 crc kubenswrapper[5028]: E1123 06:51:58.052424 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be"
Nov 23 06:51:58 crc kubenswrapper[5028]: E1123 06:51:58.052623 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[...]
Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.052269 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.052450 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:51:59 crc kubenswrapper[5028]: E1123 06:51:59.052602 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:51:59 crc kubenswrapper[5028]: E1123 06:51:59.053242 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.147785 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.147853 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.147878 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.147908 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.147940 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:59Z","lastTransitionTime":"2025-11-23T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.251566 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.251637 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.251656 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.251689 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:51:59 crc kubenswrapper[5028]: I1123 06:51:59.251708 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:51:59Z","lastTransitionTime":"2025-11-23T06:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.052149 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.052176 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:52:00 crc kubenswrapper[5028]: E1123 06:52:00.052327 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 23 06:52:00 crc kubenswrapper[5028]: E1123 06:52:00.052445 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be"
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.075986 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.076039 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.076057 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.076080 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.076097 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:00Z","lastTransitionTime":"2025-11-23T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.179645 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.179705 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.179727 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.179756 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.179777 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:00Z","lastTransitionTime":"2025-11-23T06:52:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:52:00 crc kubenswrapper[5028]: I1123 06:52:00.486844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:52:00 crc kubenswrapper[5028]: E1123 06:52:00.487097 5028 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 23 06:52:00 crc kubenswrapper[5028]: E1123 06:52:00.487218 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs podName:bfed01d0-dd8f-478d-991f-4a9242b1c2be nodeName:}" failed. No retries permitted until 2025-11-23 06:53:04.487183708 +0000 UTC m=+168.184588557 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs") pod "network-metrics-daemon-5ft9z" (UID: "bfed01d0-dd8f-478d-991f-4a9242b1c2be") : object "openshift-multus"/"metrics-daemon-secret" not registered
[...]
Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.052243 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.052277 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 23 06:52:01 crc kubenswrapper[5028]: E1123 06:52:01.052540 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 23 06:52:01 crc kubenswrapper[5028]: E1123 06:52:01.052821 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.054124 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 06:52:01 crc kubenswrapper[5028]: E1123 06:52:01.054382 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.105814 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.105875 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.105900 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.105929 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.105982 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:01Z","lastTransitionTime":"2025-11-23T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.210300 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.210361 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.210375 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.210396 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:01 crc kubenswrapper[5028]: I1123 06:52:01.210410 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:01Z","lastTransitionTime":"2025-11-23T06:52:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.052633 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.052669 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z"
Nov 23 06:52:02 crc kubenswrapper[5028]: E1123 06:52:02.052841 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:02 crc kubenswrapper[5028]: E1123 06:52:02.053483 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.141518 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.141569 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.141583 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.141603 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.141616 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.246139 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.246196 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.246214 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.246322 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.246353 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.349852 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.349934 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.350214 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.350279 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.350303 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.453658 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.453727 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.453745 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.453774 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.453794 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.557063 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.557119 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.557131 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.557150 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.557163 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.661092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.661160 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.661182 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.661208 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.661226 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.765269 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.765336 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.765356 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.765380 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.765399 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.869092 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.869141 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.869161 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.869184 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.869201 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.974038 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.974114 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.974137 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.974169 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:02 crc kubenswrapper[5028]: I1123 06:52:02.974194 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:02Z","lastTransitionTime":"2025-11-23T06:52:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.052301 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.052467 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:03 crc kubenswrapper[5028]: E1123 06:52:03.053408 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:03 crc kubenswrapper[5028]: E1123 06:52:03.053609 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.077429 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.077477 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.077491 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.077511 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.077528 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.180095 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.180138 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.180151 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.180173 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.180190 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.283670 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.283743 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.283762 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.283794 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.283815 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.387075 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.387153 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.387168 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.387197 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.387215 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.491129 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.491211 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.491231 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.491259 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.491280 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.594544 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.594611 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.594621 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.594641 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.594652 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.704026 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.704111 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.704126 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.704152 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.704164 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.807028 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.807099 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.807119 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.807203 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.807237 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.911200 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.911340 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.911362 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.911389 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:03 crc kubenswrapper[5028]: I1123 06:52:03.911405 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:03Z","lastTransitionTime":"2025-11-23T06:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.015253 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.015330 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.015349 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.015379 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.015402 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.052234 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.052287 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:04 crc kubenswrapper[5028]: E1123 06:52:04.052463 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:04 crc kubenswrapper[5028]: E1123 06:52:04.052638 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.119546 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.119620 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.119665 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.119709 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.119737 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.222985 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.223059 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.223087 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.223128 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.223157 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.327094 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.327146 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.327157 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.327175 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.327188 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.429353 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.429389 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.429397 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.429425 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.429439 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.532207 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.532251 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.532263 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.532285 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.532298 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.634349 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.634463 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.634487 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.634516 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.634532 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.662822 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.663202 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.663297 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.663332 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.663410 5028 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-23T06:52:04Z","lastTransitionTime":"2025-11-23T06:52:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.693496 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7"] Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.693976 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.697306 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.698614 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.700335 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.701284 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.720731 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.720703666 podStartE2EDuration="1m28.720703666s" podCreationTimestamp="2025-11-23 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.720087051 +0000 UTC m=+108.417491930" watchObservedRunningTime="2025-11-23 06:52:04.720703666 +0000 UTC m=+108.418108485" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.769433 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-m2sl7" podStartSLOduration=82.769406491 podStartE2EDuration="1m22.769406491s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.769078213 +0000 UTC m=+108.466482992" watchObservedRunningTime="2025-11-23 06:52:04.769406491 +0000 UTC m=+108.466811310" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.769831 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-678pf" podStartSLOduration=83.769821802 podStartE2EDuration="1m23.769821802s" podCreationTimestamp="2025-11-23 06:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.751832217 +0000 UTC m=+108.449236996" watchObservedRunningTime="2025-11-23 06:52:04.769821802 +0000 UTC m=+108.467226621" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.839150 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.839241 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.839287 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.839620 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.839711 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.855594 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-7l9fm" podStartSLOduration=82.855574843 podStartE2EDuration="1m22.855574843s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.842369516 +0000 UTC m=+108.539774305" watchObservedRunningTime="2025-11-23 06:52:04.855574843 +0000 UTC m=+108.552979622" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.903563 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.90354331 podStartE2EDuration="39.90354331s" podCreationTimestamp="2025-11-23 06:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.880880329 +0000 UTC m=+108.578285128" watchObservedRunningTime="2025-11-23 06:52:04.90354331 +0000 UTC m=+108.600948079" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.904034 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5dbq2" podStartSLOduration=82.904029992 podStartE2EDuration="1m22.904029992s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.903776786 +0000 UTC m=+108.601181575" watchObservedRunningTime="2025-11-23 06:52:04.904029992 +0000 UTC m=+108.601434771" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.937329 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=88.937315365 podStartE2EDuration="1m28.937315365s" podCreationTimestamp="2025-11-23 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.935120221 +0000 UTC m=+108.632525000" watchObservedRunningTime="2025-11-23 
06:52:04.937315365 +0000 UTC m=+108.634720144" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.940574 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.940619 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.940638 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.940663 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.940708 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.940761 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.940775 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.941657 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.949113 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.950358 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=83.950341648 podStartE2EDuration="1m23.950341648s" podCreationTimestamp="2025-11-23 06:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.950066381 +0000 UTC m=+108.647471170" watchObservedRunningTime="2025-11-23 06:52:04.950341648 +0000 UTC m=+108.647746427" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.963659 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.963642547 podStartE2EDuration="51.963642547s" podCreationTimestamp="2025-11-23 06:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.963638287 +0000 UTC m=+108.661043066" watchObservedRunningTime="2025-11-23 06:52:04.963642547 +0000 UTC m=+108.661047326" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.969395 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7jlp7\" (UID: \"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:04 crc kubenswrapper[5028]: I1123 06:52:04.991503 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-w2dj6" podStartSLOduration=83.991482766 podStartE2EDuration="1m23.991482766s" podCreationTimestamp="2025-11-23 06:50:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:04.991206469 +0000 UTC m=+108.688611258" watchObservedRunningTime="2025-11-23 06:52:04.991482766 +0000 UTC m=+108.688887545" Nov 23 06:52:05 crc kubenswrapper[5028]: I1123 06:52:05.011101 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" Nov 23 06:52:05 crc kubenswrapper[5028]: I1123 06:52:05.052071 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:05 crc kubenswrapper[5028]: I1123 06:52:05.052376 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:05 crc kubenswrapper[5028]: E1123 06:52:05.052492 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:05 crc kubenswrapper[5028]: E1123 06:52:05.052731 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:05 crc kubenswrapper[5028]: I1123 06:52:05.075889 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podStartSLOduration=83.075869884 podStartE2EDuration="1m23.075869884s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:05.075553546 +0000 UTC m=+108.772958325" watchObservedRunningTime="2025-11-23 06:52:05.075869884 +0000 UTC m=+108.773274663" Nov 23 06:52:05 crc kubenswrapper[5028]: I1123 06:52:05.614293 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" event={"ID":"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7","Type":"ContainerStarted","Data":"6fdc5d98d47d46a3b690731ceb9ac1f55a78f8f1e5b537e19b3c5c51615fc557"} Nov 23 06:52:05 crc kubenswrapper[5028]: I1123 06:52:05.614356 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" event={"ID":"8b68e5e8-6d56-4bbb-9f70-d660e2bce9d7","Type":"ContainerStarted","Data":"3c11f2a15d9511165ea4eb5da3195cb74e261f2a60ab6b0f85e54775a4d8c7dd"} Nov 23 06:52:05 crc kubenswrapper[5028]: I1123 06:52:05.635297 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7jlp7" podStartSLOduration=83.635266156 podStartE2EDuration="1m23.635266156s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:05.634757514 +0000 UTC m=+109.332162293" watchObservedRunningTime="2025-11-23 06:52:05.635266156 +0000 UTC m=+109.332670975" Nov 23 06:52:06 crc kubenswrapper[5028]: I1123 06:52:06.052597 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:06 crc kubenswrapper[5028]: I1123 06:52:06.052610 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:06 crc kubenswrapper[5028]: E1123 06:52:06.052783 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:06 crc kubenswrapper[5028]: E1123 06:52:06.052969 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:07 crc kubenswrapper[5028]: I1123 06:52:07.052559 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:07 crc kubenswrapper[5028]: I1123 06:52:07.052559 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:07 crc kubenswrapper[5028]: E1123 06:52:07.054572 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:07 crc kubenswrapper[5028]: E1123 06:52:07.054692 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:08 crc kubenswrapper[5028]: I1123 06:52:08.052437 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:08 crc kubenswrapper[5028]: I1123 06:52:08.052476 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:08 crc kubenswrapper[5028]: E1123 06:52:08.052985 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:08 crc kubenswrapper[5028]: E1123 06:52:08.053162 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:09 crc kubenswrapper[5028]: I1123 06:52:09.053151 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:09 crc kubenswrapper[5028]: I1123 06:52:09.053241 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:09 crc kubenswrapper[5028]: E1123 06:52:09.053414 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:09 crc kubenswrapper[5028]: E1123 06:52:09.053575 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:10 crc kubenswrapper[5028]: I1123 06:52:10.052834 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:10 crc kubenswrapper[5028]: I1123 06:52:10.052933 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:10 crc kubenswrapper[5028]: E1123 06:52:10.053021 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:10 crc kubenswrapper[5028]: E1123 06:52:10.053246 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:11 crc kubenswrapper[5028]: I1123 06:52:11.052250 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:11 crc kubenswrapper[5028]: E1123 06:52:11.052461 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:11 crc kubenswrapper[5028]: I1123 06:52:11.052883 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:11 crc kubenswrapper[5028]: E1123 06:52:11.053136 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:12 crc kubenswrapper[5028]: I1123 06:52:12.053076 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:12 crc kubenswrapper[5028]: I1123 06:52:12.053256 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:12 crc kubenswrapper[5028]: E1123 06:52:12.053303 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:12 crc kubenswrapper[5028]: E1123 06:52:12.053683 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:13 crc kubenswrapper[5028]: I1123 06:52:13.052343 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:13 crc kubenswrapper[5028]: I1123 06:52:13.052394 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:13 crc kubenswrapper[5028]: E1123 06:52:13.052612 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:13 crc kubenswrapper[5028]: E1123 06:52:13.052748 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:14 crc kubenswrapper[5028]: I1123 06:52:14.052916 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:14 crc kubenswrapper[5028]: I1123 06:52:14.053015 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:14 crc kubenswrapper[5028]: E1123 06:52:14.053269 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:14 crc kubenswrapper[5028]: E1123 06:52:14.053437 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:15 crc kubenswrapper[5028]: I1123 06:52:15.052092 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:15 crc kubenswrapper[5028]: I1123 06:52:15.052264 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:15 crc kubenswrapper[5028]: E1123 06:52:15.052933 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:15 crc kubenswrapper[5028]: E1123 06:52:15.053159 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:15 crc kubenswrapper[5028]: I1123 06:52:15.053397 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 06:52:15 crc kubenswrapper[5028]: E1123 06:52:15.053691 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xbtxp_openshift-ovn-kubernetes(68dc0fb8-309c-46ef-a4f8-f0eff3169061)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.052377 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.052417 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:16 crc kubenswrapper[5028]: E1123 06:52:16.052517 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:16 crc kubenswrapper[5028]: E1123 06:52:16.052631 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.657772 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/1.log" Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.658389 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/0.log" Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.658430 5028 generic.go:334] "Generic (PLEG): container finished" podID="e634c65f-8585-4d5d-b929-b9e1255f8921" containerID="f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad" exitCode=1 Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.658466 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerDied","Data":"f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad"} Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.658514 5028 scope.go:117] "RemoveContainer" containerID="34ef8772edc2d95921e37010f613cd9fde6fe42450c72edc09666423112b5321" Nov 23 06:52:16 crc kubenswrapper[5028]: I1123 06:52:16.659464 5028 scope.go:117] "RemoveContainer" containerID="f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad" Nov 23 06:52:16 crc kubenswrapper[5028]: E1123 06:52:16.659853 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-m2sl7_openshift-multus(e634c65f-8585-4d5d-b929-b9e1255f8921)\"" pod="openshift-multus/multus-m2sl7" podUID="e634c65f-8585-4d5d-b929-b9e1255f8921" Nov 23 06:52:17 crc kubenswrapper[5028]: E1123 06:52:17.039061 5028 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 23 06:52:17 crc kubenswrapper[5028]: I1123 06:52:17.053000 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:17 crc kubenswrapper[5028]: I1123 06:52:17.053018 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:17 crc kubenswrapper[5028]: E1123 06:52:17.055757 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:17 crc kubenswrapper[5028]: E1123 06:52:17.056041 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:17 crc kubenswrapper[5028]: E1123 06:52:17.168073 5028 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:52:17 crc kubenswrapper[5028]: I1123 06:52:17.663616 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/1.log" Nov 23 06:52:18 crc kubenswrapper[5028]: I1123 06:52:18.052659 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:18 crc kubenswrapper[5028]: I1123 06:52:18.052768 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:18 crc kubenswrapper[5028]: E1123 06:52:18.052817 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:18 crc kubenswrapper[5028]: E1123 06:52:18.052988 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:19 crc kubenswrapper[5028]: I1123 06:52:19.052583 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:19 crc kubenswrapper[5028]: I1123 06:52:19.052611 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:19 crc kubenswrapper[5028]: E1123 06:52:19.052849 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:19 crc kubenswrapper[5028]: E1123 06:52:19.053134 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:20 crc kubenswrapper[5028]: I1123 06:52:20.052098 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:20 crc kubenswrapper[5028]: I1123 06:52:20.052098 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:20 crc kubenswrapper[5028]: E1123 06:52:20.052261 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:20 crc kubenswrapper[5028]: E1123 06:52:20.052442 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:21 crc kubenswrapper[5028]: I1123 06:52:21.052855 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:21 crc kubenswrapper[5028]: I1123 06:52:21.053004 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:21 crc kubenswrapper[5028]: E1123 06:52:21.053296 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:21 crc kubenswrapper[5028]: E1123 06:52:21.053427 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:22 crc kubenswrapper[5028]: I1123 06:52:22.053166 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:22 crc kubenswrapper[5028]: I1123 06:52:22.053245 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:22 crc kubenswrapper[5028]: E1123 06:52:22.053371 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:22 crc kubenswrapper[5028]: E1123 06:52:22.053664 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:22 crc kubenswrapper[5028]: E1123 06:52:22.169812 5028 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:52:23 crc kubenswrapper[5028]: I1123 06:52:23.052696 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:23 crc kubenswrapper[5028]: E1123 06:52:23.052845 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:23 crc kubenswrapper[5028]: I1123 06:52:23.052860 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:23 crc kubenswrapper[5028]: E1123 06:52:23.053024 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:24 crc kubenswrapper[5028]: I1123 06:52:24.052593 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:24 crc kubenswrapper[5028]: I1123 06:52:24.052621 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:24 crc kubenswrapper[5028]: E1123 06:52:24.052847 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:24 crc kubenswrapper[5028]: E1123 06:52:24.052919 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:25 crc kubenswrapper[5028]: I1123 06:52:25.052645 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:25 crc kubenswrapper[5028]: E1123 06:52:25.052778 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:25 crc kubenswrapper[5028]: I1123 06:52:25.052654 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:25 crc kubenswrapper[5028]: E1123 06:52:25.053006 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:26 crc kubenswrapper[5028]: I1123 06:52:26.051994 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:26 crc kubenswrapper[5028]: I1123 06:52:26.052029 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:26 crc kubenswrapper[5028]: E1123 06:52:26.052124 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:26 crc kubenswrapper[5028]: E1123 06:52:26.052210 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:27 crc kubenswrapper[5028]: I1123 06:52:27.052315 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:27 crc kubenswrapper[5028]: I1123 06:52:27.052380 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:27 crc kubenswrapper[5028]: E1123 06:52:27.053421 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:27 crc kubenswrapper[5028]: E1123 06:52:27.053600 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:27 crc kubenswrapper[5028]: E1123 06:52:27.170585 5028 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.052338 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.052414 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:28 crc kubenswrapper[5028]: E1123 06:52:28.052538 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:28 crc kubenswrapper[5028]: E1123 06:52:28.052648 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.054000 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.701094 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/3.log" Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.705073 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerStarted","Data":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.705582 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.732302 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podStartSLOduration=106.732281046 podStartE2EDuration="1m46.732281046s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:28.731298062 +0000 UTC m=+132.428702841" watchObservedRunningTime="2025-11-23 06:52:28.732281046 +0000 UTC m=+132.429685825" Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.920938 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5ft9z"] Nov 23 06:52:28 crc kubenswrapper[5028]: I1123 06:52:28.921091 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:28 crc kubenswrapper[5028]: E1123 06:52:28.921200 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:29 crc kubenswrapper[5028]: I1123 06:52:29.052512 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:29 crc kubenswrapper[5028]: I1123 06:52:29.054995 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:29 crc kubenswrapper[5028]: E1123 06:52:29.055241 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:29 crc kubenswrapper[5028]: E1123 06:52:29.055346 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:30 crc kubenswrapper[5028]: I1123 06:52:30.053612 5028 scope.go:117] "RemoveContainer" containerID="f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad" Nov 23 06:52:30 crc kubenswrapper[5028]: I1123 06:52:30.053715 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:30 crc kubenswrapper[5028]: E1123 06:52:30.053856 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:30 crc kubenswrapper[5028]: I1123 06:52:30.712024 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/1.log" Nov 23 06:52:30 crc kubenswrapper[5028]: I1123 06:52:30.712409 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerStarted","Data":"75d33e1dc0b68ad40438ab47e02f0cf419a600e603e43938a10adad0b49ac4a8"} Nov 23 06:52:31 crc kubenswrapper[5028]: I1123 06:52:31.053062 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:31 crc kubenswrapper[5028]: I1123 06:52:31.053133 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:31 crc kubenswrapper[5028]: I1123 06:52:31.053078 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:31 crc kubenswrapper[5028]: E1123 06:52:31.053279 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5ft9z" podUID="bfed01d0-dd8f-478d-991f-4a9242b1c2be" Nov 23 06:52:31 crc kubenswrapper[5028]: E1123 06:52:31.053381 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 23 06:52:31 crc kubenswrapper[5028]: E1123 06:52:31.053513 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 23 06:52:32 crc kubenswrapper[5028]: I1123 06:52:32.052337 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:32 crc kubenswrapper[5028]: E1123 06:52:32.052602 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.052419 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.052450 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.052508 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.056456 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.061242 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.061357 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.061595 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.061665 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 23 06:52:33 crc kubenswrapper[5028]: I1123 06:52:33.061780 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 23 06:52:34 crc kubenswrapper[5028]: I1123 06:52:34.052626 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.568830 5028 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.630831 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.631426 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.631506 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.631760 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.632755 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tcdk5"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.633386 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.633471 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7rpm6"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.634118 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.648025 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.648246 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.648449 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.650117 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q44fq"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.650329 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.650559 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.650650 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.651474 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.651664 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.651823 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652034 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652177 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652406 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.651857 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652417 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652755 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652701 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652847 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.652850 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 23 06:52:35 
crc kubenswrapper[5028]: I1123 06:52:35.653161 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.653227 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.656757 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.657431 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.657461 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.657674 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.659386 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wfgw7"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.660315 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.661679 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ct4c7"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.662726 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.663141 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.672728 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.672874 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.673019 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.673127 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.673291 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.673671 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.673963 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.674329 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.674728 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.675029 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.675366 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hbwhl"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.676103 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-m4qn7"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.676194 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.676590 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m4qn7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.676991 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.678408 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.678591 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.682353 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vm29q"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.682884 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685154 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hb2pv"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685249 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685500 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685581 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685704 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685791 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685922 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.685923 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.686026 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.686142 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.686203 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.686294 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.692072 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.692154 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.692176 5028 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.692248 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.692077 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.692280 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.693300 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.694472 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.695096 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.695507 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.695508 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.699252 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nvfz9"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.699837 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.705741 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.708243 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.708708 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mmks"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.709028 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.709408 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.709782 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.710199 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.721859 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.722671 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.722846 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.723461 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.724236 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.726185 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.729155 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.731065 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.737707 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.739863 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.751036 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.752326 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.755537 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.756625 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.757623 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.760049 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.760281 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.760421 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.760520 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.760673 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.760992 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.761770 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.762101 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.762251 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.762427 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.760465 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.762586 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.776036 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.776894 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.777224 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.778039 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.778664 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-57wj4"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.779210 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.779500 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.780310 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.781229 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.781490 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.781668 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.781757 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.781903 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.782060 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.782149 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.782616 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.782729 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.782861 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.783183 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.783267 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.783367 5028 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.783460 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.784849 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.786994 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789391 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789537 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789598 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07b6179-c5bd-4735-b0a6-37f6c8d402df-config\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789639 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f07b6179-c5bd-4735-b0a6-37f6c8d402df-images\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789663 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f07b6179-c5bd-4735-b0a6-37f6c8d402df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789692 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6vk\" (UniqueName: \"kubernetes.io/projected/f07b6179-c5bd-4735-b0a6-37f6c8d402df-kube-api-access-8d6vk\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789617 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789827 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789720 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.789931 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 23 06:52:35 crc 
kubenswrapper[5028]: I1123 06:52:35.789830 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.790183 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.790356 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.790449 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.790665 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.790864 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.791063 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.791517 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.791550 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.792741 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.792905 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.795172 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.796270 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.797449 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.801797 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.802053 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.803135 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.803820 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.808198 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.815987 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.820526 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.823800 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.827418 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-pppdd"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.828724 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.836697 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.843539 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.845110 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.848671 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.851470 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.854918 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.857254 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzg2c"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.859750 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.867922 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.868261 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.870143 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.870544 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.872102 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.874118 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.889595 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bxbmw"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.890475 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07b6179-c5bd-4735-b0a6-37f6c8d402df-config\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.890513 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f07b6179-c5bd-4735-b0a6-37f6c8d402df-images\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.890538 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f07b6179-c5bd-4735-b0a6-37f6c8d402df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.890564 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d6vk\" (UniqueName: \"kubernetes.io/projected/f07b6179-c5bd-4735-b0a6-37f6c8d402df-kube-api-access-8d6vk\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.890773 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.892084 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7rpm6"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.892181 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f07b6179-c5bd-4735-b0a6-37f6c8d402df-images\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.892998 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.893739 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.894259 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.894654 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f07b6179-c5bd-4735-b0a6-37f6c8d402df-config\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.896913 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tcdk5"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.898070 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f07b6179-c5bd-4735-b0a6-37f6c8d402df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.900419 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q44fq"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.902992 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nv45l"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.904041 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.904386 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wfgw7"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.905512 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ct4c7"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.908346 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mmks"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.909636 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hbwhl"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.910628 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hb2pv"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.911646 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.914340 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.916900 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.919819 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.921705 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nvfz9"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.924337 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.927976 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-j9tbl"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.928907 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-j9tbl" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.929446 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.931141 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.933757 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m4qn7"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.934094 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.935405 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.937732 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.939332 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.940479 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.946349 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.947538 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.948529 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.949622 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.950832 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzg2c"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.951812 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.953056 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pppdd"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.953811 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.953821 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.955038 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.956021 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.956995 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bxbmw"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.957967 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-57wj4"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.959054 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.960037 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.961044 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nv45l"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.962086 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jsktx"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.963443 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-dgqrn"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.963575 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.964029 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dgqrn" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.964226 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jsktx"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.974657 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.976890 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dgqrn"] Nov 23 06:52:35 crc kubenswrapper[5028]: I1123 06:52:35.994231 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.014416 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.033333 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.053476 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.073808 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.093816 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.115701 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.134040 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.173184 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.193898 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.214006 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.233897 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.253054 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.273991 5028 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.293775 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.313156 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.335398 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.354592 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.373634 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.393912 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.414367 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.434040 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.454395 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.473853 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.503922 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.513359 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.533851 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.554305 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.574247 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.593295 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.614597 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.634679 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 23 
06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.654688 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.694087 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.714527 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.734107 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.754626 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.774162 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.794898 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.812041 5028 request.go:700] Waited for 1.007514215s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.815053 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.834743 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.854825 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.875234 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.894819 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.914994 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.935816 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.955608 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.984015 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 23 06:52:36 crc kubenswrapper[5028]: I1123 06:52:36.994308 5028 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"oauth-serving-cert" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.014867 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.035071 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.056016 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.074878 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.094307 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.114528 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.146245 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.153720 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.175553 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.193514 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.214049 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.255070 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.266365 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d6vk\" (UniqueName: \"kubernetes.io/projected/f07b6179-c5bd-4735-b0a6-37f6c8d402df-kube-api-access-8d6vk\") pod \"machine-api-operator-5694c8668f-ct4c7\" (UID: \"f07b6179-c5bd-4735-b0a6-37f6c8d402df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.275241 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.290580 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.297453 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.317023 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.334972 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.355085 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.375655 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.395260 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.417194 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.435634 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.455402 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.474794 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.494151 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.518590 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.534644 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.554162 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.569387 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ct4c7"] Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.574192 5028 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.593583 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.614387 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.634846 5028 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.654690 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.675181 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.695411 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.719571 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64fac0c1-4e23-48c0-a162-f77370e3497e-audit-dir\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.719612 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da08adf-859a-4df3-84d6-842f8652b8c5-service-ca-bundle\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.719642 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-tls\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.719660 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-client\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.719682 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.719709 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-audit-policies\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.719868 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: 
\"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720142 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdpgh\" (UniqueName: \"kubernetes.io/projected/b45d44a5-1077-40b8-8faf-0b206cdac95b-kube-api-access-pdpgh\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720251 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-client-ca\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720306 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2dk\" (UniqueName: \"kubernetes.io/projected/9f8d8348-6865-401f-aa66-63404d4a2869-kube-api-access-gw2dk\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720348 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-etcd-client\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720411 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-config\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720442 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-dir\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720501 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppwf\" (UniqueName: \"kubernetes.io/projected/5301827d-cd7a-4382-a397-51e3e115e834-kube-api-access-fppwf\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720531 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llhfw\" (UniqueName: \"kubernetes.io/projected/e3c4cf13-f6af-4121-9feb-653a6abd396a-kube-api-access-llhfw\") pod \"downloads-7954f5f757-m4qn7\" (UID: 
\"e3c4cf13-f6af-4121-9feb-653a6abd396a\") " pod="openshift-console/downloads-7954f5f757-m4qn7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720561 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-serving-cert\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720584 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-audit\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720606 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-etcd-serving-ca\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720670 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ba6016-7bd8-4ee0-9dd9-111f320e064f-serving-cert\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720694 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-ca\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720761 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-trusted-ca\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720851 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bdd8\" (UniqueName: \"kubernetes.io/projected/64fac0c1-4e23-48c0-a162-f77370e3497e-kube-api-access-2bdd8\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.720979 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-etcd-client\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721024 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721310 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-bound-sa-token\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721355 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/857ed9f1-ee3f-4d84-8945-71b7211bcf02-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721378 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9f8d8348-6865-401f-aa66-63404d4a2869-machine-approver-tls\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721402 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-service-ca-bundle\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721422 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-policies\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721442 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ad0fd40-348f-46f6-87f8-001fc9918495-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721460 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhff9\" (UniqueName: \"kubernetes.io/projected/857ed9f1-ee3f-4d84-8945-71b7211bcf02-kube-api-access-fhff9\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 
06:52:37.721480 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/483a94d2-3437-4165-a8c3-6a014b2dcea4-serving-cert\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721524 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721546 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f8d8348-6865-401f-aa66-63404d4a2869-auth-proxy-config\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721624 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721671 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td2p4\" (UniqueName: \"kubernetes.io/projected/42ba6016-7bd8-4ee0-9dd9-111f320e064f-kube-api-access-td2p4\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721703 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.721884 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3baecb2d-3513-4920-a11b-18947bda4669-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722071 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-config\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722110 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvwb\" (UniqueName: \"kubernetes.io/projected/3da08adf-859a-4df3-84d6-842f8652b8c5-kube-api-access-svvwb\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: E1123 06:52:37.722188 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.222169234 +0000 UTC m=+141.919574023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722225 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722260 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722285 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-config\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722313 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722338 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3baecb2d-3513-4920-a11b-18947bda4669-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722365 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722387 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722412 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1cfb9270-8022-420c-b9e3-ecdfed90d6bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qlj5c\" (UID: \"1cfb9270-8022-420c-b9e3-ecdfed90d6bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722452 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-client-ca\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722474 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-service-ca\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722498 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722520 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3baecb2d-3513-4920-a11b-18947bda4669-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722543 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8d8348-6865-401f-aa66-63404d4a2869-config\") pod 
\"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722636 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64fac0c1-4e23-48c0-a162-f77370e3497e-node-pullsecrets\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722692 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce1d92fe-ce3d-46df-8c5a-1896d7115fdd-metrics-tls\") pod \"dns-operator-744455d44c-nvfz9\" (UID: \"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722728 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6g8\" (UniqueName: \"kubernetes.io/projected/ce1d92fe-ce3d-46df-8c5a-1896d7115fdd-kube-api-access-7b6g8\") pod \"dns-operator-744455d44c-nvfz9\" (UID: \"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722751 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khjk9\" (UniqueName: \"kubernetes.io/projected/1cfb9270-8022-420c-b9e3-ecdfed90d6bf-kube-api-access-khjk9\") pod \"cluster-samples-operator-665b6dd947-qlj5c\" (UID: \"1cfb9270-8022-420c-b9e3-ecdfed90d6bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722818 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301827d-cd7a-4382-a397-51e3e115e834-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.722853 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301827d-cd7a-4382-a397-51e3e115e834-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723109 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-config\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723138 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723162 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwt6\" (UniqueName: \"kubernetes.io/projected/483a94d2-3437-4165-a8c3-6a014b2dcea4-kube-api-access-pjwt6\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723191 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-stats-auth\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723229 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-certificates\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723254 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-serving-cert\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723275 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-default-certificate\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723305 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/857ed9f1-ee3f-4d84-8945-71b7211bcf02-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723349 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-encryption-config\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723426 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz9lf\" (UniqueName: \"kubernetes.io/projected/3baecb2d-3513-4920-a11b-18947bda4669-kube-api-access-lz9lf\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: 
\"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723495 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723549 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-metrics-certs\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723607 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-config\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723643 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zcm4\" (UniqueName: \"kubernetes.io/projected/48e2ebf7-77fd-43a3-9e8b-d89458a00707-kube-api-access-7zcm4\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723690 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723719 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk94r\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-kube-api-access-kk94r\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723744 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcs8j\" (UniqueName: \"kubernetes.io/projected/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-kube-api-access-tcs8j\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723770 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b45d44a5-1077-40b8-8faf-0b206cdac95b-serving-cert\") pod \"authentication-operator-69f744f599-q44fq\" (UID: 
\"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723802 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ad0fd40-348f-46f6-87f8-001fc9918495-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723834 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-config\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723861 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-image-import-ca\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723892 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.723926 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.751599 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" event={"ID":"f07b6179-c5bd-4735-b0a6-37f6c8d402df","Type":"ContainerStarted","Data":"92d78d8c0154d92b2a79b2f79e0fb96d3e41419e3c3f2cf15d3caeef4bbfa475"} Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.751649 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" event={"ID":"f07b6179-c5bd-4735-b0a6-37f6c8d402df","Type":"ContainerStarted","Data":"8a9e0c346a91c6847dee860f2dc92cd6689bcd8a88f602a17390a2282f123d24"} Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825214 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:37 crc kubenswrapper[5028]: E1123 06:52:37.825402 5028 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.325368929 +0000 UTC m=+142.022773708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825507 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301827d-cd7a-4382-a397-51e3e115e834-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825570 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301827d-cd7a-4382-a397-51e3e115e834-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825603 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-serving-cert\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825627 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-proxy-tls\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825653 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-config\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825678 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825745 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwt6\" (UniqueName: 
\"kubernetes.io/projected/483a94d2-3437-4165-a8c3-6a014b2dcea4-kube-api-access-pjwt6\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825794 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab3473d-0543-4160-8ad4-f262ec89e82b-secret-volume\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.827633 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828143 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-config\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828324 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5301827d-cd7a-4382-a397-51e3e115e834-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.825823 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-serving-cert\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828584 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vthc\" (UniqueName: \"kubernetes.io/projected/abfae71c-faa4-4d70-989c-7a248d6730e0-kube-api-access-7vthc\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828652 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz9lf\" (UniqueName: \"kubernetes.io/projected/3baecb2d-3513-4920-a11b-18947bda4669-kube-api-access-lz9lf\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828690 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828716 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfkk\" (UniqueName: \"kubernetes.io/projected/21e76c25-ba6e-439a-8e9e-6650b7bda321-kube-api-access-ldfkk\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828756 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4766e1-6147-47be-8a4a-bb52d8370962-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-57wj4\" (UID: \"ad4766e1-6147-47be-8a4a-bb52d8370962\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828796 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zcm4\" (UniqueName: \"kubernetes.io/projected/48e2ebf7-77fd-43a3-9e8b-d89458a00707-kube-api-access-7zcm4\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828835 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aeff5e8e-1eb5-49cd-b72c-b01283e487ca-cert\") pod \"ingress-canary-dgqrn\" (UID: \"aeff5e8e-1eb5-49cd-b72c-b01283e487ca\") " pod="openshift-ingress-canary/ingress-canary-dgqrn" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828875 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mcbp\" (UniqueName: \"kubernetes.io/projected/e193cc0a-93bc-4f63-8b56-255209ee7c66-kube-api-access-6mcbp\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828902 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21e76c25-ba6e-439a-8e9e-6650b7bda321-trusted-ca\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828933 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.828974 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 
06:52:37.829007 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e34e5a9-95a3-43d5-8c81-ed837e907109-audit-dir\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.829040 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.829830 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-service-ca\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830316 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b45d44a5-1077-40b8-8faf-0b206cdac95b-serving-cert\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830393 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ad0fd40-348f-46f6-87f8-001fc9918495-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830473 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-csi-data-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830552 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830653 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-config\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830741 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-image-import-ca\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830845 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da08adf-859a-4df3-84d6-842f8652b8c5-service-ca-bundle\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.830934 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831085 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4z7b\" (UniqueName: \"kubernetes.io/projected/ad4766e1-6147-47be-8a4a-bb52d8370962-kube-api-access-c4z7b\") pod \"multus-admission-controller-857f4d67dd-57wj4\" (UID: \"ad4766e1-6147-47be-8a4a-bb52d8370962\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831179 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-srv-cert\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831259 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab3473d-0543-4160-8ad4-f262ec89e82b-config-volume\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831343 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-client-ca\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831381 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-etcd-client\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831444 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: 
\"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831472 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjnxf\" (UniqueName: \"kubernetes.io/projected/851ad942-5cf5-45a1-8b47-3c4adbfaef4a-kube-api-access-tjnxf\") pod \"package-server-manager-789f6589d5-bt4rc\" (UID: \"851ad942-5cf5-45a1-8b47-3c4adbfaef4a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831550 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llhfw\" (UniqueName: \"kubernetes.io/projected/e3c4cf13-f6af-4121-9feb-653a6abd396a-kube-api-access-llhfw\") pod \"downloads-7954f5f757-m4qn7\" (UID: \"e3c4cf13-f6af-4121-9feb-653a6abd396a\") " pod="openshift-console/downloads-7954f5f757-m4qn7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831597 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a06ae06-c671-4329-885f-930b0847abac-proxy-tls\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831645 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-serving-cert\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831702 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppwf\" (UniqueName: \"kubernetes.io/projected/5301827d-cd7a-4382-a397-51e3e115e834-kube-api-access-fppwf\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831708 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ad0fd40-348f-46f6-87f8-001fc9918495-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831741 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-etcd-serving-ca\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831792 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-audit\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc 
kubenswrapper[5028]: I1123 06:52:37.831833 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ba6016-7bd8-4ee0-9dd9-111f320e064f-serving-cert\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831878 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-ca\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.831925 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21e76c25-ba6e-439a-8e9e-6650b7bda321-metrics-tls\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832016 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-etcd-client\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832066 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832106 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832157 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-bound-sa-token\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832207 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-policies\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832254 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ca22d3a-be54-4425-ba62-490d86d77e02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832292 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-socket-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832334 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-mountpoint-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832390 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-service-ca-bundle\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832431 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-registration-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832474 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz86z\" (UniqueName: \"kubernetes.io/projected/fab3473d-0543-4160-8ad4-f262ec89e82b-kube-api-access-qz86z\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832482 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-client-ca\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832514 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a04f0f8-5007-43d3-907b-d35f7e68b40f-profile-collector-cert\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832565 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/483a94d2-3437-4165-a8c3-6a014b2dcea4-serving-cert\") pod \"controller-manager-879f6c89f-tcdk5\" 
(UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832611 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7gjc\" (UniqueName: \"kubernetes.io/projected/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-kube-api-access-g7gjc\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832658 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f8d8348-6865-401f-aa66-63404d4a2869-auth-proxy-config\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832714 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832753 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td2p4\" (UniqueName: \"kubernetes.io/projected/42ba6016-7bd8-4ee0-9dd9-111f320e064f-kube-api-access-td2p4\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832796 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832841 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vh7\" (UniqueName: \"kubernetes.io/projected/6e34e5a9-95a3-43d5-8c81-ed837e907109-kube-api-access-79vh7\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832882 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-config\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvwb\" (UniqueName: \"kubernetes.io/projected/3da08adf-859a-4df3-84d6-842f8652b8c5-kube-api-access-svvwb\") pod \"router-default-5444994796-vm29q\" (UID: 
\"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.832984 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833055 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5np\" (UniqueName: \"kubernetes.io/projected/4a04f0f8-5007-43d3-907b-d35f7e68b40f-kube-api-access-nn5np\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833117 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833159 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/271c7a5f-b0ff-458f-804e-34922261cb06-certs\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833194 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833237 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1cfb9270-8022-420c-b9e3-ecdfed90d6bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qlj5c\" (UID: \"1cfb9270-8022-420c-b9e3-ecdfed90d6bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833279 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca22d3a-be54-4425-ba62-490d86d77e02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833322 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-config\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " 
pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833360 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-client-ca\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833400 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-service-ca\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833448 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3baecb2d-3513-4920-a11b-18947bda4669-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833491 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8d8348-6865-401f-aa66-63404d4a2869-config\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833531 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64fac0c1-4e23-48c0-a162-f77370e3497e-node-pullsecrets\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833592 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833634 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-config\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833636 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f82cfa58-77b7-450f-b554-1db8ad48b250-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" Nov 23 06:52:37 crc 
kubenswrapper[5028]: I1123 06:52:37.833695 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62llp\" (UniqueName: \"kubernetes.io/projected/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-kube-api-access-62llp\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833729 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pls2s\" (UniqueName: \"kubernetes.io/projected/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-kube-api-access-pls2s\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833784 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-serving-cert\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833815 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-oauth-config\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833855 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-stats-auth\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833884 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-default-certificate\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833916 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abfae71c-faa4-4d70-989c-7a248d6730e0-metrics-tls\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833966 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcj7n\" (UniqueName: \"kubernetes.io/projected/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-kube-api-access-kcj7n\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.833998 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/e193cc0a-93bc-4f63-8b56-255209ee7c66-signing-cabundle\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834051 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-certificates\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834082 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqrsz\" (UniqueName: \"kubernetes.io/projected/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-kube-api-access-fqrsz\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834115 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/857ed9f1-ee3f-4d84-8945-71b7211bcf02-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834146 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-encryption-config\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834171 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-metrics-certs\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834204 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276hs\" (UniqueName: \"kubernetes.io/projected/94d1e6cc-d93d-4c83-82f3-3e84551beace-kube-api-access-276hs\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834258 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-config\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834284 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc33ea4b-0b9c-4f25-8a84-337153c350c1-config\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " 
pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834316 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w5cr\" (UniqueName: \"kubernetes.io/projected/aeff5e8e-1eb5-49cd-b72c-b01283e487ca-kube-api-access-7w5cr\") pod \"ingress-canary-dgqrn\" (UID: \"aeff5e8e-1eb5-49cd-b72c-b01283e487ca\") " pod="openshift-ingress-canary/ingress-canary-dgqrn" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834344 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-oauth-serving-cert\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834375 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21e76c25-ba6e-439a-8e9e-6650b7bda321-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834400 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-webhook-cert\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834434 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-tmpfs\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834468 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk94r\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-kube-api-access-kk94r\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834500 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcs8j\" (UniqueName: \"kubernetes.io/projected/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-kube-api-access-tcs8j\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834530 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-config\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 
06:52:37.834558 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct8rg\" (UniqueName: \"kubernetes.io/projected/3a06ae06-c671-4329-885f-930b0847abac-kube-api-access-ct8rg\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834608 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834638 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64fac0c1-4e23-48c0-a162-f77370e3497e-audit-dir\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834674 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834708 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-trusted-ca-bundle\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834744 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834777 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-audit-policies\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834805 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-config\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834843 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-tls\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834877 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-client\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834961 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-plugins-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.834997 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px4hm\" (UniqueName: \"kubernetes.io/projected/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-kube-api-access-px4hm\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835035 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdpgh\" (UniqueName: \"kubernetes.io/projected/b45d44a5-1077-40b8-8faf-0b206cdac95b-kube-api-access-pdpgh\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835068 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw2dk\" (UniqueName: \"kubernetes.io/projected/9f8d8348-6865-401f-aa66-63404d4a2869-kube-api-access-gw2dk\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835097 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-config\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835129 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-dir\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: 
\"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835139 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835160 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnbzn\" (UniqueName: \"kubernetes.io/projected/271c7a5f-b0ff-458f-804e-34922261cb06-kube-api-access-bnbzn\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835194 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbkvb\" (UniqueName: \"kubernetes.io/projected/2220b96b-60ef-46c9-850e-4e7a38727019-kube-api-access-nbkvb\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835230 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2220b96b-60ef-46c9-850e-4e7a38727019-serving-cert\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835258 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-serving-cert\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835287 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/271c7a5f-b0ff-458f-804e-34922261cb06-node-bootstrap-token\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835318 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8292t\" (UniqueName: \"kubernetes.io/projected/0d14055c-94d0-4e92-a453-05c96dd4387b-kube-api-access-8292t\") pod \"migrator-59844c95c7-xndhh\" (UID: \"0d14055c-94d0-4e92-a453-05c96dd4387b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835349 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce1e72c5-9d4f-47ff-805d-921034752820-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-gc5xr\" (UID: \"ce1e72c5-9d4f-47ff-805d-921034752820\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.835380 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgbmf\" (UniqueName: \"kubernetes.io/projected/ce1e72c5-9d4f-47ff-805d-921034752820-kube-api-access-fgbmf\") pod \"control-plane-machine-set-operator-78cbb6b69f-gc5xr\" (UID: \"ce1e72c5-9d4f-47ff-805d-921034752820\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.836080 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.836234 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-etcd-serving-ca\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.837264 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-audit\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.837686 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-etcd-client\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.838196 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-serving-cert\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.838929 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b45d44a5-1077-40b8-8faf-0b206cdac95b-serving-cert\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.839621 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.839987 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/857ed9f1-ee3f-4d84-8945-71b7211bcf02-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.840049 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/64fac0c1-4e23-48c0-a162-f77370e3497e-audit-dir\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: E1123 06:52:37.840695 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.340678746 +0000 UTC m=+142.038083525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.840799 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ba6016-7bd8-4ee0-9dd9-111f320e064f-serving-cert\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.841088 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9f8d8348-6865-401f-aa66-63404d4a2869-auth-proxy-config\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.841433 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-image-import-ca\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.841733 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.842129 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3da08adf-859a-4df3-84d6-842f8652b8c5-service-ca-bundle\") pod \"router-default-5444994796-vm29q\" (UID: 
\"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.842821 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-encryption-config\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.842986 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bdd8\" (UniqueName: \"kubernetes.io/projected/64fac0c1-4e23-48c0-a162-f77370e3497e-kube-api-access-2bdd8\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843072 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-config\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843084 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33ea4b-0b9c-4f25-8a84-337153c350c1-serving-cert\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843207 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-apiservice-cert\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843318 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-trusted-ca\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843424 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a06ae06-c671-4329-885f-930b0847abac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843564 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9f8d8348-6865-401f-aa66-63404d4a2869-machine-approver-tls\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc 
kubenswrapper[5028]: I1123 06:52:37.843623 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abfae71c-faa4-4d70-989c-7a248d6730e0-config-volume\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843675 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e193cc0a-93bc-4f63-8b56-255209ee7c66-signing-key\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843758 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/857ed9f1-ee3f-4d84-8945-71b7211bcf02-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843812 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc33ea4b-0b9c-4f25-8a84-337153c350c1-trusted-ca\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843862 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ad0fd40-348f-46f6-87f8-001fc9918495-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.843907 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhff9\" (UniqueName: \"kubernetes.io/projected/857ed9f1-ee3f-4d84-8945-71b7211bcf02-kube-api-access-fhff9\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.844005 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.844055 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2220b96b-60ef-46c9-850e-4e7a38727019-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.845095 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/64fac0c1-4e23-48c0-a162-f77370e3497e-node-pullsecrets\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.845238 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/857ed9f1-ee3f-4d84-8945-71b7211bcf02-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.845882 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-audit-policies\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.846301 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/483a94d2-3437-4165-a8c3-6a014b2dcea4-serving-cert\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.846802 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-metrics-certs\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.847056 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-client-ca\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.847106 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-etcd-client\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.847348 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.847376 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc 
kubenswrapper[5028]: I1123 06:52:37.847609 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.848130 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-config\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.848513 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f8d8348-6865-401f-aa66-63404d4a2869-config\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849035 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-service-ca\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849476 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-service-ca-bundle\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849692 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3baecb2d-3513-4920-a11b-18947bda4669-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849739 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a04f0f8-5007-43d3-907b-d35f7e68b40f-srv-cert\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849815 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f82cfa58-77b7-450f-b554-1db8ad48b250-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849878 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfhlm\" (UniqueName: \"kubernetes.io/projected/dc33ea4b-0b9c-4f25-8a84-337153c350c1-kube-api-access-xfhlm\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849890 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9f8d8348-6865-401f-aa66-63404d4a2869-machine-approver-tls\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.849985 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.850007 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-certificates\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.850011 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-config\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.850714 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-dir\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.850743 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-ca\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.850771 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3baecb2d-3513-4920-a11b-18947bda4669-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.850863 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.850981 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.851121 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64fac0c1-4e23-48c0-a162-f77370e3497e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.851424 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-config\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.851522 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/851ad942-5cf5-45a1-8b47-3c4adbfaef4a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bt4rc\" (UID: \"851ad942-5cf5-45a1-8b47-3c4adbfaef4a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.851561 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jqlf\" (UniqueName: \"kubernetes.io/projected/f82cfa58-77b7-450f-b554-1db8ad48b250-kube-api-access-9jqlf\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.851611 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.851637 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-images\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: 
\"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.851823 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-tls\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.852098 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca22d3a-be54-4425-ba62-490d86d77e02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.852136 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce1d92fe-ce3d-46df-8c5a-1896d7115fdd-metrics-tls\") pod \"dns-operator-744455d44c-nvfz9\" (UID: \"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.852165 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b6g8\" (UniqueName: \"kubernetes.io/projected/ce1d92fe-ce3d-46df-8c5a-1896d7115fdd-kube-api-access-7b6g8\") pod \"dns-operator-744455d44c-nvfz9\" (UID: \"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.852229 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khjk9\" (UniqueName: \"kubernetes.io/projected/1cfb9270-8022-420c-b9e3-ecdfed90d6bf-kube-api-access-khjk9\") pod \"cluster-samples-operator-665b6dd947-qlj5c\" (UID: \"1cfb9270-8022-420c-b9e3-ecdfed90d6bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.852750 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3baecb2d-3513-4920-a11b-18947bda4669-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.852900 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b45d44a5-1077-40b8-8faf-0b206cdac95b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.852916 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" 
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.853245 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/64fac0c1-4e23-48c0-a162-f77370e3497e-encryption-config\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.853653 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.853679 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3baecb2d-3513-4920-a11b-18947bda4669-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.855182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-config\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.855383 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-policies\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.856549 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-trusted-ca\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.857591 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.857750 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-stats-auth\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.857783 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3da08adf-859a-4df3-84d6-842f8652b8c5-default-certificate\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.857907 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-etcd-client\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.858345 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1cfb9270-8022-420c-b9e3-ecdfed90d6bf-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qlj5c\" (UID: \"1cfb9270-8022-420c-b9e3-ecdfed90d6bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.859323 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5301827d-cd7a-4382-a397-51e3e115e834-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.859344 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce1d92fe-ce3d-46df-8c5a-1896d7115fdd-metrics-tls\") pod \"dns-operator-744455d44c-nvfz9\" (UID: \"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.860415 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.860893 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-serving-cert\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.863716 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwt6\" (UniqueName: \"kubernetes.io/projected/483a94d2-3437-4165-a8c3-6a014b2dcea4-kube-api-access-pjwt6\") pod \"controller-manager-879f6c89f-tcdk5\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.867380 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ad0fd40-348f-46f6-87f8-001fc9918495-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.894925 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz9lf\" (UniqueName: \"kubernetes.io/projected/3baecb2d-3513-4920-a11b-18947bda4669-kube-api-access-lz9lf\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.907996 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zcm4\" (UniqueName: \"kubernetes.io/projected/48e2ebf7-77fd-43a3-9e8b-d89458a00707-kube-api-access-7zcm4\") pod \"oauth-openshift-558db77b4-wfgw7\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.928331 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llhfw\" (UniqueName: \"kubernetes.io/projected/e3c4cf13-f6af-4121-9feb-653a6abd396a-kube-api-access-llhfw\") pod \"downloads-7954f5f757-m4qn7\" (UID: \"e3c4cf13-f6af-4121-9feb-653a6abd396a\") " pod="openshift-console/downloads-7954f5f757-m4qn7"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.947407 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppwf\" (UniqueName: \"kubernetes.io/projected/5301827d-cd7a-4382-a397-51e3e115e834-kube-api-access-fppwf\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9rrf\" (UID: \"5301827d-cd7a-4382-a397-51e3e115e834\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.953673 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.953916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-config\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.953962 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f82cfa58-77b7-450f-b554-1db8ad48b250-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.953991 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.954013 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-serving-cert\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: E1123 06:52:37.954484 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.454118493 +0000 UTC m=+142.151523312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955050 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-oauth-config\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955239 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62llp\" (UniqueName: \"kubernetes.io/projected/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-kube-api-access-62llp\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955331 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pls2s\" (UniqueName: \"kubernetes.io/projected/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-kube-api-access-pls2s\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955093 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f82cfa58-77b7-450f-b554-1db8ad48b250-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955484 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-config\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955637 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abfae71c-faa4-4d70-989c-7a248d6730e0-metrics-tls\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955840 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcj7n\" (UniqueName: \"kubernetes.io/projected/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-kube-api-access-kcj7n\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.955921 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e193cc0a-93bc-4f63-8b56-255209ee7c66-signing-cabundle\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956048 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqrsz\" (UniqueName: \"kubernetes.io/projected/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-kube-api-access-fqrsz\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956244 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc33ea4b-0b9c-4f25-8a84-337153c350c1-config\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956496 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276hs\" (UniqueName: \"kubernetes.io/projected/94d1e6cc-d93d-4c83-82f3-3e84551beace-kube-api-access-276hs\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w5cr\" (UniqueName: \"kubernetes.io/projected/aeff5e8e-1eb5-49cd-b72c-b01283e487ca-kube-api-access-7w5cr\") pod \"ingress-canary-dgqrn\" (UID: \"aeff5e8e-1eb5-49cd-b72c-b01283e487ca\") " pod="openshift-ingress-canary/ingress-canary-dgqrn"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956736 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-oauth-serving-cert\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956807 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21e76c25-ba6e-439a-8e9e-6650b7bda321-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956851 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-webhook-cert\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956890 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-config\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956923 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct8rg\" (UniqueName: \"kubernetes.io/projected/3a06ae06-c671-4329-885f-930b0847abac-kube-api-access-ct8rg\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.956983 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-tmpfs\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957044 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e193cc0a-93bc-4f63-8b56-255209ee7c66-signing-cabundle\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957086 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957129 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-trusted-ca-bundle\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957176 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-config\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957224 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-plugins-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957312 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px4hm\" (UniqueName: \"kubernetes.io/projected/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-kube-api-access-px4hm\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957367 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnbzn\" (UniqueName: \"kubernetes.io/projected/271c7a5f-b0ff-458f-804e-34922261cb06-kube-api-access-bnbzn\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957426 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbkvb\" (UniqueName: \"kubernetes.io/projected/2220b96b-60ef-46c9-850e-4e7a38727019-kube-api-access-nbkvb\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957488 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2220b96b-60ef-46c9-850e-4e7a38727019-serving-cert\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957592 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8292t\" (UniqueName: \"kubernetes.io/projected/0d14055c-94d0-4e92-a453-05c96dd4387b-kube-api-access-8292t\") pod \"migrator-59844c95c7-xndhh\" (UID: \"0d14055c-94d0-4e92-a453-05c96dd4387b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957655 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce1e72c5-9d4f-47ff-805d-921034752820-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gc5xr\" (UID: \"ce1e72c5-9d4f-47ff-805d-921034752820\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957700 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-tmpfs\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957707 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/271c7a5f-b0ff-458f-804e-34922261cb06-node-bootstrap-token\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957761 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-encryption-config\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957773 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-config\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957793 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgbmf\" (UniqueName: \"kubernetes.io/projected/ce1e72c5-9d4f-47ff-805d-921034752820-kube-api-access-fgbmf\") pod \"control-plane-machine-set-operator-78cbb6b69f-gc5xr\" (UID: \"ce1e72c5-9d4f-47ff-805d-921034752820\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957834 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33ea4b-0b9c-4f25-8a84-337153c350c1-serving-cert\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957838 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-plugins-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957862 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-apiservice-cert\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957888 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a06ae06-c671-4329-885f-930b0847abac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957915 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abfae71c-faa4-4d70-989c-7a248d6730e0-config-volume\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957936 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e193cc0a-93bc-4f63-8b56-255209ee7c66-signing-key\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957983 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc33ea4b-0b9c-4f25-8a84-337153c350c1-trusted-ca\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957984 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc33ea4b-0b9c-4f25-8a84-337153c350c1-config\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958020 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2220b96b-60ef-46c9-850e-4e7a38727019-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958055 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a04f0f8-5007-43d3-907b-d35f7e68b40f-srv-cert\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f82cfa58-77b7-450f-b554-1db8ad48b250-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958113 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfhlm\" (UniqueName: \"kubernetes.io/projected/dc33ea4b-0b9c-4f25-8a84-337153c350c1-kube-api-access-xfhlm\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958155 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/851ad942-5cf5-45a1-8b47-3c4adbfaef4a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bt4rc\" (UID: \"851ad942-5cf5-45a1-8b47-3c4adbfaef4a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958181 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jqlf\" (UniqueName: \"kubernetes.io/projected/f82cfa58-77b7-450f-b554-1db8ad48b250-kube-api-access-9jqlf\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958206 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-images\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958259 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca22d3a-be54-4425-ba62-490d86d77e02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958288 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-serving-cert\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958311 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-proxy-tls\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958334 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vthc\" (UniqueName: \"kubernetes.io/projected/abfae71c-faa4-4d70-989c-7a248d6730e0-kube-api-access-7vthc\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958356 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab3473d-0543-4160-8ad4-f262ec89e82b-secret-volume\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958381 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfkk\" (UniqueName: \"kubernetes.io/projected/21e76c25-ba6e-439a-8e9e-6650b7bda321-kube-api-access-ldfkk\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958403 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mcbp\" (UniqueName: \"kubernetes.io/projected/e193cc0a-93bc-4f63-8b56-255209ee7c66-kube-api-access-6mcbp\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958426 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21e76c25-ba6e-439a-8e9e-6650b7bda321-trusted-ca\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958451 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4766e1-6147-47be-8a4a-bb52d8370962-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-57wj4\" (UID: \"ad4766e1-6147-47be-8a4a-bb52d8370962\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958473 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aeff5e8e-1eb5-49cd-b72c-b01283e487ca-cert\") pod \"ingress-canary-dgqrn\" (UID: \"aeff5e8e-1eb5-49cd-b72c-b01283e487ca\") " pod="openshift-ingress-canary/ingress-canary-dgqrn"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958494 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958518 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e34e5a9-95a3-43d5-8c81-ed837e907109-audit-dir\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958540 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958563 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-service-ca\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958590 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-csi-data-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958621 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958644 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-srv-cert\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958664 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab3473d-0543-4160-8ad4-f262ec89e82b-config-volume\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958685 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4z7b\" (UniqueName: \"kubernetes.io/projected/ad4766e1-6147-47be-8a4a-bb52d8370962-kube-api-access-c4z7b\") pod \"multus-admission-controller-857f4d67dd-57wj4\" (UID: \"ad4766e1-6147-47be-8a4a-bb52d8370962\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958715 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958738 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjnxf\" (UniqueName: \"kubernetes.io/projected/851ad942-5cf5-45a1-8b47-3c4adbfaef4a-kube-api-access-tjnxf\") pod \"package-server-manager-789f6589d5-bt4rc\" (UID: \"851ad942-5cf5-45a1-8b47-3c4adbfaef4a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958784 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a06ae06-c671-4329-885f-930b0847abac-proxy-tls\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958807 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-serving-cert\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21e76c25-ba6e-439a-8e9e-6650b7bda321-metrics-tls\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958879 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958913 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca22d3a-be54-4425-ba62-490d86d77e02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958935 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-socket-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.958991 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-mountpoint-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959025 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-registration-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959057 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz86z\" (UniqueName: \"kubernetes.io/projected/fab3473d-0543-4160-8ad4-f262ec89e82b-kube-api-access-qz86z\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959088 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7gjc\" (UniqueName: \"kubernetes.io/projected/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-kube-api-access-g7gjc\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959111 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a04f0f8-5007-43d3-907b-d35f7e68b40f-profile-collector-cert\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959143 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959164 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a06ae06-c671-4329-885f-930b0847abac-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959174 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vh7\" (UniqueName: \"kubernetes.io/projected/6e34e5a9-95a3-43d5-8c81-ed837e907109-kube-api-access-79vh7\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959213 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959246 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5np\" (UniqueName: \"kubernetes.io/projected/4a04f0f8-5007-43d3-907b-d35f7e68b40f-kube-api-access-nn5np\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959278 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/271c7a5f-b0ff-458f-804e-34922261cb06-certs\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959310 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca22d3a-be54-4425-ba62-490d86d77e02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959850 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abfae71c-faa4-4d70-989c-7a248d6730e0-config-volume\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.959942 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-oauth-config\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.960009 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2220b96b-60ef-46c9-850e-4e7a38727019-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.960829 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-webhook-cert\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.961242 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2220b96b-60ef-46c9-850e-4e7a38727019-serving-cert\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.961806 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc33ea4b-0b9c-4f25-8a84-337153c350c1-trusted-ca\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.961825 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-trusted-ca-bundle\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.962354 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-serving-cert\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.962620 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-mountpoint-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.962800 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-registration-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.962959 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab3473d-0543-4160-8ad4-f262ec89e82b-config-volume\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.963101 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-socket-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.963445 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a04f0f8-5007-43d3-907b-d35f7e68b40f-srv-cert\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.957600 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-oauth-serving-cert\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.963718 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/21e76c25-ba6e-439a-8e9e-6650b7bda321-trusted-ca\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.963728 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/851ad942-5cf5-45a1-8b47-3c4adbfaef4a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bt4rc\" (UID: \"851ad942-5cf5-45a1-8b47-3c4adbfaef4a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.964144 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc33ea4b-0b9c-4f25-8a84-337153c350c1-serving-cert\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.964595 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.964683 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e193cc0a-93bc-4f63-8b56-255209ee7c66-signing-key\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.964746 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e34e5a9-95a3-43d5-8c81-ed837e907109-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.964897 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-config\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"
Nov 23 06:52:37 crc kubenswrapper[5028]: E1123 06:52:37.965074 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.465027762 +0000 UTC m=+142.162432551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.965490 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-images\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.965562 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.965990 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-encryption-config\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.966258 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-service-ca\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.967152 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e34e5a9-95a3-43d5-8c81-ed837e907109-audit-dir\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.967374 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.967407 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94d1e6cc-d93d-4c83-82f3-3e84551beace-csi-data-dir\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.968166 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ca22d3a-be54-4425-ba62-490d86d77e02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.968473 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-serving-cert\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.968697 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-apiservice-cert\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.969641 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-proxy-tls\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.969721 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce1e72c5-9d4f-47ff-805d-921034752820-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gc5xr\" (UID: \"ce1e72c5-9d4f-47ff-805d-921034752820\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.969852 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/21e76c25-ba6e-439a-8e9e-6650b7bda321-metrics-tls\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.969905 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.970706 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/abfae71c-faa4-4d70-989c-7a248d6730e0-metrics-tls\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.970784 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab3473d-0543-4160-8ad4-f262ec89e82b-secret-volume\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.971081 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.971288 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad4766e1-6147-47be-8a4a-bb52d8370962-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-57wj4\" (UID: \"ad4766e1-6147-47be-8a4a-bb52d8370962\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.971482 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.971495 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ca22d3a-be54-4425-ba62-490d86d77e02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.971584 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f82cfa58-77b7-450f-b554-1db8ad48b250-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.972782 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcs8j\" (UniqueName: \"kubernetes.io/projected/bfc84a0c-b65b-4a10-b0f2-9811c348bda2-kube-api-access-tcs8j\") pod \"etcd-operator-b45778765-hb2pv\" (UID: \"bfc84a0c-b65b-4a10-b0f2-9811c348bda2\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.973747 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/271c7a5f-b0ff-458f-804e-34922261cb06-certs\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.973813 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aeff5e8e-1eb5-49cd-b72c-b01283e487ca-cert\") pod \"ingress-canary-dgqrn\" (UID: \"aeff5e8e-1eb5-49cd-b72c-b01283e487ca\") " pod="openshift-ingress-canary/ingress-canary-dgqrn"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.974052 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a06ae06-c671-4329-885f-930b0847abac-proxy-tls\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.974380 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-srv-cert\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.975251 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a04f0f8-5007-43d3-907b-d35f7e68b40f-profile-collector-cert\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.976355 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/271c7a5f-b0ff-458f-804e-34922261cb06-node-bootstrap-token\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.976362 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e34e5a9-95a3-43d5-8c81-ed837e907109-serving-cert\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"
Nov 23 06:52:37 crc kubenswrapper[5028]: I1123 06:52:37.989272 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk94r\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-kube-api-access-kk94r\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.019752 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td2p4\" (UniqueName: \"kubernetes.io/projected/42ba6016-7bd8-4ee0-9dd9-111f320e064f-kube-api-access-td2p4\") pod \"route-controller-manager-6576b87f9c-jjtp4\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.029416 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67f4ce1c-e4ff-4127-a3c9-a904623caeb9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ng58c\" (UID: \"67f4ce1c-e4ff-4127-a3c9-a904623caeb9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.038293 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.047596 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.053673 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvwb\" (UniqueName: \"kubernetes.io/projected/3da08adf-859a-4df3-84d6-842f8652b8c5-kube-api-access-svvwb\") pod \"router-default-5444994796-vm29q\" (UID: \"3da08adf-859a-4df3-84d6-842f8652b8c5\") " pod="openshift-ingress/router-default-5444994796-vm29q"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.060331 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.060820 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.560806714 +0000 UTC m=+142.258211493 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.071944 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw2dk\" (UniqueName: \"kubernetes.io/projected/9f8d8348-6865-401f-aa66-63404d4a2869-kube-api-access-gw2dk\") pod \"machine-approver-56656f9798-kkknq\" (UID: \"9f8d8348-6865-401f-aa66-63404d4a2869\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.093646 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-bound-sa-token\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.111002 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.116835 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhff9\" (UniqueName: \"kubernetes.io/projected/857ed9f1-ee3f-4d84-8945-71b7211bcf02-kube-api-access-fhff9\") pod \"openshift-apiserver-operator-796bbdcf4f-2n5wq\" (UID: \"857ed9f1-ee3f-4d84-8945-71b7211bcf02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.135909 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bdd8\" (UniqueName: \"kubernetes.io/projected/64fac0c1-4e23-48c0-a162-f77370e3497e-kube-api-access-2bdd8\") pod \"apiserver-76f77b778f-7rpm6\" (UID: \"64fac0c1-4e23-48c0-a162-f77370e3497e\") " pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.158467 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdpgh\" (UniqueName: \"kubernetes.io/projected/b45d44a5-1077-40b8-8faf-0b206cdac95b-kube-api-access-pdpgh\") pod \"authentication-operator-69f744f599-q44fq\" (UID: \"b45d44a5-1077-40b8-8faf-0b206cdac95b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.163465 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.163812 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.168301 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3baecb2d-3513-4920-a11b-18947bda4669-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-blqlb\" (UID: \"3baecb2d-3513-4920-a11b-18947bda4669\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.169114 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.193847 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b6g8\" (UniqueName: \"kubernetes.io/projected/ce1d92fe-ce3d-46df-8c5a-1896d7115fdd-kube-api-access-7b6g8\") pod \"dns-operator-744455d44c-nvfz9\" (UID: \"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.200547 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m4qn7"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.209548 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khjk9\" (UniqueName: \"kubernetes.io/projected/1cfb9270-8022-420c-b9e3-ecdfed90d6bf-kube-api-access-khjk9\") pod \"cluster-samples-operator-665b6dd947-qlj5c\" (UID: \"1cfb9270-8022-420c-b9e3-ecdfed90d6bf\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.243492 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vm29q"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.243523 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.247551 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf"]
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.248722 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d4db5b1-ec96-442c-a106-3bdcddcaa3b5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jnb2v\" (UID: \"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.258329 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.264885 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.265139 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.765099452 +0000 UTC m=+142.462504231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.265206 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.265275 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.265906 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.765891591 +0000 UTC m=+142.463296370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.273838 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62llp\" (UniqueName: \"kubernetes.io/projected/b9eec920-6d0b-4064-8dd8-44b2f4fe722a-kube-api-access-62llp\") pod \"machine-config-operator-74547568cd-rghr8\" (UID: \"b9eec920-6d0b-4064-8dd8-44b2f4fe722a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:38 crc kubenswrapper[5028]: W1123 06:52:38.274091 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5301827d_cd7a_4382_a397_51e3e115e834.slice/crio-2d49a90efbc067b7ed5dff621f1654ff80ae46030594079ed05e037310dad873 WatchSource:0}: Error finding container 2d49a90efbc067b7ed5dff621f1654ff80ae46030594079ed05e037310dad873: Status 404 returned error can't find the container with id 2d49a90efbc067b7ed5dff621f1654ff80ae46030594079ed05e037310dad873
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.274455 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9"
Nov 23 06:52:38 crc kubenswrapper[5028]: W1123 06:52:38.279261 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da08adf_859a_4df3_84d6_842f8652b8c5.slice/crio-6442be49175a822e55587a457a157e36b1b4564ba0128c94e273f32e41b39e62 WatchSource:0}: Error finding container 6442be49175a822e55587a457a157e36b1b4564ba0128c94e273f32e41b39e62: Status 404 returned error can't find the container with id 6442be49175a822e55587a457a157e36b1b4564ba0128c94e273f32e41b39e62
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.280551 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.295114 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pls2s\" (UniqueName: \"kubernetes.io/projected/fcf9070d-93b3-468e-93c2-094b7fbe5a6b-kube-api-access-pls2s\") pod \"packageserver-d55dfcdfc-8cd2d\" (UID: \"fcf9070d-93b3-468e-93c2-094b7fbe5a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.308180 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.312802 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcj7n\" (UniqueName: \"kubernetes.io/projected/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-kube-api-access-kcj7n\") pod \"console-f9d7485db-pppdd\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") " pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.329026 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqrsz\" (UniqueName: \"kubernetes.io/projected/3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160-kube-api-access-fqrsz\") pod \"service-ca-operator-777779d784-tbbmb\" (UID: \"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.329322 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.338258 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"]
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.351486 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276hs\" (UniqueName: \"kubernetes.io/projected/94d1e6cc-d93d-4c83-82f3-3e84551beace-kube-api-access-276hs\") pod \"csi-hostpathplugin-jsktx\" (UID: \"94d1e6cc-d93d-4c83-82f3-3e84551beace\") " pod="hostpath-provisioner/csi-hostpathplugin-jsktx"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.357993 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.366631 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.367392 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.867308742 +0000 UTC m=+142.564713511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.369038 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w5cr\" (UniqueName: \"kubernetes.io/projected/aeff5e8e-1eb5-49cd-b72c-b01283e487ca-kube-api-access-7w5cr\") pod \"ingress-canary-dgqrn\" (UID: \"aeff5e8e-1eb5-49cd-b72c-b01283e487ca\") " pod="openshift-ingress-canary/ingress-canary-dgqrn"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.378548 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tcdk5"]
Nov 23 06:52:38 crc kubenswrapper[5028]: W1123 06:52:38.387183 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f8d8348_6865_401f_aa66_63404d4a2869.slice/crio-e093a54258f449b821302187aa6eb3afa485e1c9e996d598cdf9befc5d8db2c5 WatchSource:0}: Error finding container e093a54258f449b821302187aa6eb3afa485e1c9e996d598cdf9befc5d8db2c5: Status 404 returned error can't find the container with id e093a54258f449b821302187aa6eb3afa485e1c9e996d598cdf9befc5d8db2c5
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.389629 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.390463 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/21e76c25-ba6e-439a-8e9e-6650b7bda321-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.412150 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct8rg\" (UniqueName: \"kubernetes.io/projected/3a06ae06-c671-4329-885f-930b0847abac-kube-api-access-ct8rg\") pod \"machine-config-controller-84d6567774-rkqnx\" (UID: \"3a06ae06-c671-4329-885f-930b0847abac\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.414522 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m4qn7"]
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.422636 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6"
Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.433782 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pppdd"
Need to start a new one" pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.444731 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnbzn\" (UniqueName: \"kubernetes.io/projected/271c7a5f-b0ff-458f-804e-34922261cb06-kube-api-access-bnbzn\") pod \"machine-config-server-j9tbl\" (UID: \"271c7a5f-b0ff-458f-804e-34922261cb06\") " pod="openshift-machine-config-operator/machine-config-server-j9tbl" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.447851 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wfgw7"] Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.452393 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.459145 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px4hm\" (UniqueName: \"kubernetes.io/projected/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-kube-api-access-px4hm\") pod \"marketplace-operator-79b997595-fzg2c\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.463789 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:38 crc kubenswrapper[5028]: W1123 06:52:38.467225 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3c4cf13_f6af_4121_9feb_653a6abd396a.slice/crio-ed1a6967d8f896f763065d5dda988c2bdd3128a633ded0856aa5db6befb2fdb8 WatchSource:0}: Error finding container ed1a6967d8f896f763065d5dda988c2bdd3128a633ded0856aa5db6befb2fdb8: Status 404 returned error can't find the container with id ed1a6967d8f896f763065d5dda988c2bdd3128a633ded0856aa5db6befb2fdb8 Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.468390 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.468736 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:38.968717803 +0000 UTC m=+142.666122582 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.475832 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.476823 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgbmf\" (UniqueName: \"kubernetes.io/projected/ce1e72c5-9d4f-47ff-805d-921034752820-kube-api-access-fgbmf\") pod \"control-plane-machine-set-operator-78cbb6b69f-gc5xr\" (UID: \"ce1e72c5-9d4f-47ff-805d-921034752820\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.498180 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-j9tbl" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.509826 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfhlm\" (UniqueName: \"kubernetes.io/projected/dc33ea4b-0b9c-4f25-8a84-337153c350c1-kube-api-access-xfhlm\") pod \"console-operator-58897d9998-hbwhl\" (UID: \"dc33ea4b-0b9c-4f25-8a84-337153c350c1\") " pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.519015 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.522383 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8292t\" (UniqueName: \"kubernetes.io/projected/0d14055c-94d0-4e92-a453-05c96dd4387b-kube-api-access-8292t\") pod \"migrator-59844c95c7-xndhh\" (UID: \"0d14055c-94d0-4e92-a453-05c96dd4387b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.522772 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.528928 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-dgqrn" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.539531 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbkvb\" (UniqueName: \"kubernetes.io/projected/2220b96b-60ef-46c9-850e-4e7a38727019-kube-api-access-nbkvb\") pod \"openshift-config-operator-7777fb866f-rpcj5\" (UID: \"2220b96b-60ef-46c9-850e-4e7a38727019\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.558575 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca22d3a-be54-4425-ba62-490d86d77e02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmtfk\" (UID: \"2ca22d3a-be54-4425-ba62-490d86d77e02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.572522 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.573054 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.073032895 +0000 UTC m=+142.770437674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.588722 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vh7\" (UniqueName: \"kubernetes.io/projected/6e34e5a9-95a3-43d5-8c81-ed837e907109-kube-api-access-79vh7\") pod \"apiserver-7bbb656c7d-z6mq2\" (UID: \"6e34e5a9-95a3-43d5-8c81-ed837e907109\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.605334 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nvfz9"] Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.608265 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hb2pv"] Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.615694 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz86z\" (UniqueName: \"kubernetes.io/projected/fab3473d-0543-4160-8ad4-f262ec89e82b-kube-api-access-qz86z\") pod \"collect-profiles-29398005-6sblh\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.635874 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c4z7b\" (UniqueName: \"kubernetes.io/projected/ad4766e1-6147-47be-8a4a-bb52d8370962-kube-api-access-c4z7b\") pod \"multus-admission-controller-857f4d67dd-57wj4\" (UID: \"ad4766e1-6147-47be-8a4a-bb52d8370962\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.638212 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7gjc\" (UniqueName: \"kubernetes.io/projected/a0bec766-1a35-4ba4-b2f0-5f624982b6dd-kube-api-access-g7gjc\") pod \"olm-operator-6b444d44fb-s8wh2\" (UID: \"a0bec766-1a35-4ba4-b2f0-5f624982b6dd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.650474 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.654312 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c"] Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.658062 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vthc\" (UniqueName: \"kubernetes.io/projected/abfae71c-faa4-4d70-989c-7a248d6730e0-kube-api-access-7vthc\") pod \"dns-default-nv45l\" (UID: \"abfae71c-faa4-4d70-989c-7a248d6730e0\") " pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.668784 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.672892 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5np\" (UniqueName: \"kubernetes.io/projected/4a04f0f8-5007-43d3-907b-d35f7e68b40f-kube-api-access-nn5np\") pod \"catalog-operator-68c6474976-682pg\" (UID: \"4a04f0f8-5007-43d3-907b-d35f7e68b40f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.675148 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.675622 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.175604764 +0000 UTC m=+142.873009543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.677901 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.678115 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.692078 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjnxf\" (UniqueName: \"kubernetes.io/projected/851ad942-5cf5-45a1-8b47-3c4adbfaef4a-kube-api-access-tjnxf\") pod \"package-server-manager-789f6589d5-bt4rc\" (UID: \"851ad942-5cf5-45a1-8b47-3c4adbfaef4a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.699842 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.711694 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jqlf\" (UniqueName: \"kubernetes.io/projected/f82cfa58-77b7-450f-b554-1db8ad48b250-kube-api-access-9jqlf\") pod \"kube-storage-version-migrator-operator-b67b599dd-7jk29\" (UID: \"f82cfa58-77b7-450f-b554-1db8ad48b250\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.712896 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.725236 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.728414 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.735298 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfkk\" (UniqueName: \"kubernetes.io/projected/21e76c25-ba6e-439a-8e9e-6650b7bda321-kube-api-access-ldfkk\") pod \"ingress-operator-5b745b69d9-bdz8r\" (UID: \"21e76c25-ba6e-439a-8e9e-6650b7bda321\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.740990 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.749692 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.755076 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.768625 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.771236 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mcbp\" (UniqueName: \"kubernetes.io/projected/e193cc0a-93bc-4f63-8b56-255209ee7c66-kube-api-access-6mcbp\") pod \"service-ca-9c57cc56f-bxbmw\" (UID: \"e193cc0a-93bc-4f63-8b56-255209ee7c66\") " pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.778694 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.778836 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.278796569 +0000 UTC m=+142.976201348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.778926 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.779154 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vm29q" event={"ID":"3da08adf-859a-4df3-84d6-842f8652b8c5","Type":"ContainerStarted","Data":"6442be49175a822e55587a457a157e36b1b4564ba0128c94e273f32e41b39e62"} Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.780042 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.280021289 +0000 UTC m=+142.977426068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.784696 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.791615 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.796774 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m4qn7" event={"ID":"e3c4cf13-f6af-4121-9feb-653a6abd396a","Type":"ContainerStarted","Data":"ed1a6967d8f896f763065d5dda988c2bdd3128a633ded0856aa5db6befb2fdb8"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.807418 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" event={"ID":"9f8d8348-6865-401f-aa66-63404d4a2869","Type":"ContainerStarted","Data":"e093a54258f449b821302187aa6eb3afa485e1c9e996d598cdf9befc5d8db2c5"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.831751 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" event={"ID":"483a94d2-3437-4165-a8c3-6a014b2dcea4","Type":"ContainerStarted","Data":"232566c8bf23f51ed49eac5857d2568dd7bcc074c0e1c13175b80dc2e1b0713a"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.843807 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" event={"ID":"42ba6016-7bd8-4ee0-9dd9-111f320e064f","Type":"ContainerStarted","Data":"c68feff1197bb7bb88b669fb311e1809bc88f1a2948425a121207fc4df9b1191"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.845810 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" event={"ID":"48e2ebf7-77fd-43a3-9e8b-d89458a00707","Type":"ContainerStarted","Data":"26d2fdafd5686e68b91647dc2773bd8a5022e0b132f51540333930b7263d5467"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.846988 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" event={"ID":"5301827d-cd7a-4382-a397-51e3e115e834","Type":"ContainerStarted","Data":"6f71e89d090df2c528aefd8115ff71ce0e9beafa3ad2ccd9d7e5571fd5dafeef"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.847022 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" event={"ID":"5301827d-cd7a-4382-a397-51e3e115e834","Type":"ContainerStarted","Data":"2d49a90efbc067b7ed5dff621f1654ff80ae46030594079ed05e037310dad873"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.848518 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" 
event={"ID":"f07b6179-c5bd-4735-b0a6-37f6c8d402df","Type":"ContainerStarted","Data":"fe1d7b3b858a920023b394d47bed2c93e173a5d7fece38d79509b3a6a3efbc98"} Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.856337 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq"] Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.879589 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.880000 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.379979944 +0000 UTC m=+143.077384723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: W1123 06:52:38.950432 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f4ce1c_e4ff_4127_a3c9_a904623caeb9.slice/crio-b9c746d20ed1e5aff0040d91c259818cd2970ec2cd36d8c8540ef11db54ad222 WatchSource:0}: Error finding container b9c746d20ed1e5aff0040d91c259818cd2970ec2cd36d8c8540ef11db54ad222: Status 404 returned error can't find the container with id b9c746d20ed1e5aff0040d91c259818cd2970ec2cd36d8c8540ef11db54ad222 Nov 23 06:52:38 crc kubenswrapper[5028]: W1123 06:52:38.952392 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce1d92fe_ce3d_46df_8c5a_1896d7115fdd.slice/crio-a7a610fc77ee261ee4fbfb2a7ee62252cb157af082e041ae64438137285f7e22 WatchSource:0}: Error finding container a7a610fc77ee261ee4fbfb2a7ee62252cb157af082e041ae64438137285f7e22: Status 404 returned error can't find the container with id a7a610fc77ee261ee4fbfb2a7ee62252cb157af082e041ae64438137285f7e22 Nov 23 06:52:38 crc kubenswrapper[5028]: W1123 06:52:38.959622 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod857ed9f1_ee3f_4d84_8945_71b7211bcf02.slice/crio-81b046b0b0a3ac4f5c317babc94c17d99d9eabec31f26b9bfd6e0b2591795b87 WatchSource:0}: Error finding container 81b046b0b0a3ac4f5c317babc94c17d99d9eabec31f26b9bfd6e0b2591795b87: Status 404 returned error can't find the container with id 81b046b0b0a3ac4f5c317babc94c17d99d9eabec31f26b9bfd6e0b2591795b87 Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.986666 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" 
(UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:38 crc kubenswrapper[5028]: E1123 06:52:38.987164 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.487149077 +0000 UTC m=+143.184553856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:38 crc kubenswrapper[5028]: I1123 06:52:38.990729 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.005184 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.031767 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.087593 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.087772 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.587722617 +0000 UTC m=+143.285127406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.088253 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.088660 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 06:52:39.58864694 +0000 UTC m=+143.286051719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.091645 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.125689 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.132578 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.146590 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7rpm6"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.148357 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.189733 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.189915 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.689896916 +0000 UTC m=+143.387301695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.190080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.190403 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 06:52:39.690395369 +0000 UTC m=+143.387800148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.292201 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.292528 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.792511627 +0000 UTC m=+143.489916406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.294554 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.319621 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-pppdd"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.326310 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q44fq"] Nov 23 06:52:39 crc kubenswrapper[5028]: W1123 06:52:39.333633 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64fac0c1_4e23_48c0_a162_f77370e3497e.slice/crio-61b420fe747794caedfa7aa20c02593534be6dfb978b7df685f313039304ba4e WatchSource:0}: Error finding container 61b420fe747794caedfa7aa20c02593534be6dfb978b7df685f313039304ba4e: Status 404 returned error can't find the container with id 61b420fe747794caedfa7aa20c02593534be6dfb978b7df685f313039304ba4e Nov 23 06:52:39 crc kubenswrapper[5028]: W1123 06:52:39.337778 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d4db5b1_ec96_442c_a106_3bdcddcaa3b5.slice/crio-eaa88de16bd45a37ca57c5abab59d44406547416db3b2d9654398438b46e9bf0 WatchSource:0}: Error finding container eaa88de16bd45a37ca57c5abab59d44406547416db3b2d9654398438b46e9bf0: Status 404 returned error can't find the container with id eaa88de16bd45a37ca57c5abab59d44406547416db3b2d9654398438b46e9bf0 Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.349664 5028 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hbwhl"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.393915 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.394511 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.894496852 +0000 UTC m=+143.591901631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: W1123 06:52:39.445657 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod872d66c4_4f5a_4067_8aa5_5cb7b56b9f94.slice/crio-5e5615e89f47ca1ca54c0efe767cd890ed934e890372093e783902e978329196 WatchSource:0}: Error finding container 5e5615e89f47ca1ca54c0efe767cd890ed934e890372093e783902e978329196: Status 404 returned error can't find the container with id 5e5615e89f47ca1ca54c0efe767cd890ed934e890372093e783902e978329196 Nov 23 06:52:39 crc kubenswrapper[5028]: W1123 06:52:39.473857 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc33ea4b_0b9c_4f25_8a84_337153c350c1.slice/crio-8e8b7ff40040444f8a6078e37be74125f74be07022a7bee44c9cac411c7c1827 WatchSource:0}: Error finding container 8e8b7ff40040444f8a6078e37be74125f74be07022a7bee44c9cac411c7c1827: Status 404 returned error can't find the container with id 8e8b7ff40040444f8a6078e37be74125f74be07022a7bee44c9cac411c7c1827 Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.495211 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.496020 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:39.996001755 +0000 UTC m=+143.693406524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.562773 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jsktx"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.576621 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.578608 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-dgqrn"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.596730 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.597179 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.097163749 +0000 UTC m=+143.794568528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.689582 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx"] Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.702117 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.702464 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"] Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.702541 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.202526217 +0000 UTC m=+143.899930996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: W1123 06:52:39.788430 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d1e6cc_d93d_4c83_82f3_3e84551beace.slice/crio-6ad929d102f9dbb3c81aca779977947e9a02334b870d680f6c700fa5e318e825 WatchSource:0}: Error finding container 6ad929d102f9dbb3c81aca779977947e9a02334b870d680f6c700fa5e318e825: Status 404 returned error can't find the container with id 6ad929d102f9dbb3c81aca779977947e9a02334b870d680f6c700fa5e318e825 Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.803308 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.803634 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.30362029 +0000 UTC m=+144.001025069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.896106 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" event={"ID":"dc33ea4b-0b9c-4f25-8a84-337153c350c1","Type":"ContainerStarted","Data":"8e8b7ff40040444f8a6078e37be74125f74be07022a7bee44c9cac411c7c1827"} Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.905986 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" event={"ID":"fcf9070d-93b3-468e-93c2-094b7fbe5a6b","Type":"ContainerStarted","Data":"f28b899f373f8d317327d0cccfa941fd9300dfaef063290d1067310f1b7049bc"} Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.906678 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.907057 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.40703469 +0000 UTC m=+144.104439469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.907175 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.907518 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.407508622 +0000 UTC m=+144.104913401 (durationBeforeRetry 500ms). 
Nov 23 06:52:39 crc kubenswrapper[5028]: E1123 06:52:39.907518 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.407508622 +0000 UTC m=+144.104913401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.911823 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vm29q" event={"ID":"3da08adf-859a-4df3-84d6-842f8652b8c5","Type":"ContainerStarted","Data":"ce095866d1d75f3c76db0983c7085c50634c069fcd4a265f60e43ed78ae8efd0"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.915375 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m4qn7" event={"ID":"e3c4cf13-f6af-4121-9feb-653a6abd396a","Type":"ContainerStarted","Data":"4c1a1433e4c64c0ea5603b811acc929e38fa873623b2d4bf9b6014130c929d58"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.916395 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-m4qn7"
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.918274 5028 patch_prober.go:28] interesting pod/downloads-7954f5f757-m4qn7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.918344 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m4qn7" podUID="e3c4cf13-f6af-4121-9feb-653a6abd396a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.918926 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" event={"ID":"64fac0c1-4e23-48c0-a162-f77370e3497e","Type":"ContainerStarted","Data":"61b420fe747794caedfa7aa20c02593534be6dfb978b7df685f313039304ba4e"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.926883 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" event={"ID":"3a06ae06-c671-4329-885f-930b0847abac","Type":"ContainerStarted","Data":"b258d0dd16d6fb5078a4a79d58b486f7aaf136dc154855bd9b7c60a6c8b95989"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.937617 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dgqrn" event={"ID":"aeff5e8e-1eb5-49cd-b72c-b01283e487ca","Type":"ContainerStarted","Data":"100ccdd5307a4b932dd678b57416cc14f86790f8cb534091d208ed336e57acd8"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.955824 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" event={"ID":"857ed9f1-ee3f-4d84-8945-71b7211bcf02","Type":"ContainerStarted","Data":"81b046b0b0a3ac4f5c317babc94c17d99d9eabec31f26b9bfd6e0b2591795b87"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.982005 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" event={"ID":"bfc84a0c-b65b-4a10-b0f2-9811c348bda2","Type":"ContainerStarted","Data":"b2d490e927b108da343cd984fea335befe63d8c535da65c391616cedb593fb77"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.987142 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pppdd" event={"ID":"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94","Type":"ContainerStarted","Data":"5e5615e89f47ca1ca54c0efe767cd890ed934e890372093e783902e978329196"}
Nov 23 06:52:39 crc kubenswrapper[5028]: I1123 06:52:39.993105 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" event={"ID":"2ca22d3a-be54-4425-ba62-490d86d77e02","Type":"ContainerStarted","Data":"cd86890f928da93f758ef6dfb23661f5d3799354a7146ab6fd3cc1891bc512ef"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.008104 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.011641 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.509456366 +0000 UTC m=+144.206861145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.019517 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.022972 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" event={"ID":"9f8d8348-6865-401f-aa66-63404d4a2869","Type":"ContainerStarted","Data":"448e3df6ca06ad3bb4e183719b1de23961f7a630a2f5d1ebb104f99c8f7ea9d5"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.026513 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-57wj4"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.049589 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" event={"ID":"42ba6016-7bd8-4ee0-9dd9-111f320e064f","Type":"ContainerStarted","Data":"d30c214994c6830e46fa01fbf0ee0cded86a4dd94d5125b7392832470f16cf18"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.049854 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.051810 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" event={"ID":"b9eec920-6d0b-4064-8dd8-44b2f4fe722a","Type":"ContainerStarted","Data":"d3656ac5cda2ba55e5c1eb43290c940163fec629913a9569ade42ff71f1958fe"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.052295 5028 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jjtp4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.052329 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" podUID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.055632 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" event={"ID":"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160","Type":"ContainerStarted","Data":"7e41bca3f6797839d0bbf8ea463b010bd5dec1321fa105279528689824261dfe"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.056828 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" event={"ID":"fab3473d-0543-4160-8ad4-f262ec89e82b","Type":"ContainerStarted","Data":"23765d2c78605914154aecc0e02a35ba8844b8afdefaee9d229655383f11bf80"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.059474 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-j9tbl" event={"ID":"271c7a5f-b0ff-458f-804e-34922261cb06","Type":"ContainerStarted","Data":"4069ef9fb15764dc17a89289afc487a5784ccf032fe3261f4f58b7e78408b7cb"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.061606 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" event={"ID":"3baecb2d-3513-4920-a11b-18947bda4669","Type":"ContainerStarted","Data":"593c573de3dbf65bf96074fc53e2bf63ff4c166d796456df96a99115a7bb2d8b"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.069607 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" event={"ID":"b45d44a5-1077-40b8-8faf-0b206cdac95b","Type":"ContainerStarted","Data":"1791d81bce5789cda3e8b9edeacf3c9acccdbd18d10b75980bbc68687c65ce9f"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.071905 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" event={"ID":"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd","Type":"ContainerStarted","Data":"a7a610fc77ee261ee4fbfb2a7ee62252cb157af082e041ae64438137285f7e22"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.080508 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" event={"ID":"94d1e6cc-d93d-4c83-82f3-3e84551beace","Type":"ContainerStarted","Data":"6ad929d102f9dbb3c81aca779977947e9a02334b870d680f6c700fa5e318e825"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.094518 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" event={"ID":"1cfb9270-8022-420c-b9e3-ecdfed90d6bf","Type":"ContainerStarted","Data":"4d83e4535353395e8e008bb40819f9ce12fbd4d60cc86348fff139dbde6013e6"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.095784 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" event={"ID":"67f4ce1c-e4ff-4127-a3c9-a904623caeb9","Type":"ContainerStarted","Data":"b9c746d20ed1e5aff0040d91c259818cd2970ec2cd36d8c8540ef11db54ad222"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.103607 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v" event={"ID":"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5","Type":"ContainerStarted","Data":"eaa88de16bd45a37ca57c5abab59d44406547416db3b2d9654398438b46e9bf0"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.110750 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.111965 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.611927053 +0000 UTC m=+144.309331892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.137082 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" event={"ID":"483a94d2-3437-4165-a8c3-6a014b2dcea4","Type":"ContainerStarted","Data":"e236e3c3875d3016e6bb387a0ff4a4e44496368c8dde8b01df1aa58e463d4cf9"}
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.138041 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5"
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.147457 5028 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tcdk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.147522 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.159258 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.167056 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzg2c"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.170592 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.180851 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.190257 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.192762 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.212291 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.212358 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.712336249 +0000 UTC m=+144.409741028 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.214147 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.214635 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.714610245 +0000 UTC m=+144.412015024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.246226 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vm29q"
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.247218 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.247281 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.315380 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.316409 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.816390065 +0000 UTC m=+144.513794844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.369450 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr"]
Nov 23 06:52:40 crc kubenswrapper[5028]: W1123 06:52:40.387864 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod012b6f83_ae0e_4a83_b806_6634bb4c1f4a.slice/crio-86540e52dd48ff3ce80fbf8ddf9821521e3dc03d4483fbd8744db9a93dd54c5b WatchSource:0}: Error finding container 86540e52dd48ff3ce80fbf8ddf9821521e3dc03d4483fbd8744db9a93dd54c5b: Status 404 returned error can't find the container with id 86540e52dd48ff3ce80fbf8ddf9821521e3dc03d4483fbd8744db9a93dd54c5b
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.397133 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.400655 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bxbmw"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.406063 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nv45l"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.408039 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29"]
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.417262 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.417733 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:40.917704583 +0000 UTC m=+144.615109362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: W1123 06:52:40.485093 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e34e5a9_95a3_43d5_8c81_ed837e907109.slice/crio-7cf71e3c51987029cdc7fca4acb45aa09082a3c4c03ebb1211e0f9dd019a5657 WatchSource:0}: Error finding container 7cf71e3c51987029cdc7fca4acb45aa09082a3c4c03ebb1211e0f9dd019a5657: Status 404 returned error can't find the container with id 7cf71e3c51987029cdc7fca4acb45aa09082a3c4c03ebb1211e0f9dd019a5657
Nov 23 06:52:40 crc kubenswrapper[5028]: W1123 06:52:40.489331 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0bec766_1a35_4ba4_b2f0_5f624982b6dd.slice/crio-b4e1136eb303c4f77713d887cda0184fdfe54482d270fb3eb5a3086dec988f3d WatchSource:0}: Error finding container b4e1136eb303c4f77713d887cda0184fdfe54482d270fb3eb5a3086dec988f3d: Status 404 returned error can't find the container with id b4e1136eb303c4f77713d887cda0184fdfe54482d270fb3eb5a3086dec988f3d
Nov 23 06:52:40 crc kubenswrapper[5028]: W1123 06:52:40.499101 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d14055c_94d0_4e92_a453_05c96dd4387b.slice/crio-c1291dfcddf7017da99cf0746ce4b77b2f4ee76b1c3092dfa0928e1964511715 WatchSource:0}: Error finding container c1291dfcddf7017da99cf0746ce4b77b2f4ee76b1c3092dfa0928e1964511715: Status 404 returned error can't find the container with id c1291dfcddf7017da99cf0746ce4b77b2f4ee76b1c3092dfa0928e1964511715
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.518105 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.518524 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.018502588 +0000 UTC m=+144.715907367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.518744 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.519086 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.019075793 +0000 UTC m=+144.716480572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:40 crc kubenswrapper[5028]: W1123 06:52:40.532062 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a04f0f8_5007_43d3_907b_d35f7e68b40f.slice/crio-37452cd8f21caf36b6b255a8822f95d6dd398e29a266790830038d1d4a499516 WatchSource:0}: Error finding container 37452cd8f21caf36b6b255a8822f95d6dd398e29a266790830038d1d4a499516: Status 404 returned error can't find the container with id 37452cd8f21caf36b6b255a8822f95d6dd398e29a266790830038d1d4a499516
Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.547670 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ct4c7" podStartSLOduration=118.547652147 podStartE2EDuration="1m58.547652147s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:40.546174861 +0000 UTC m=+144.243579640" watchObservedRunningTime="2025-11-23 06:52:40.547652147 +0000 UTC m=+144.245056926"
Nov 23 06:52:40 crc kubenswrapper[5028]: W1123 06:52:40.602685 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabfae71c_faa4_4d70_989c_7a248d6730e0.slice/crio-02e30d9d0a3d3507e69a86960f222cef62e0d30e3860612d0b2c01d1d6885724 WatchSource:0}: Error finding container 02e30d9d0a3d3507e69a86960f222cef62e0d30e3860612d0b2c01d1d6885724: Status 404 returned error can't find the container with id 02e30d9d0a3d3507e69a86960f222cef62e0d30e3860612d0b2c01d1d6885724
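[Editor's note] The manager.go:1169 warnings above come from cAdvisor inside the kubelet: it sees a new crio-<id> cgroup appear, asks the runtime about that container, and gets a 404 because the container is not yet (or no longer) inspectable. The matching ContainerStarted events for the same IDs appear moments later, so these warnings are transient during mass pod startup. An illustrative helper, not part of the kubelet, for pulling the 64-hex-digit container IDs out of such lines so they can be correlated with later PLEG events:

```go
// Illustrative only: extract container IDs from the
// "Failed to process watch event" warnings above so they can be
// matched against later ContainerStarted events with the same ID.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`crio-([0-9a-f]{64})`)
	line := `W1123 06:52:40.532062 5028 manager.go:1169] Failed to process watch event ` +
		`{EventType:0 Name:/kubepods.slice/.../crio-37452cd8f21caf36b6b255a8822f95d6dd398e29a266790830038d1d4a499516 WatchSource:0}`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Println("container id:", m[1])
	}
}
```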
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.620356 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.120341519 +0000 UTC m=+144.817746298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.625143 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9rrf" podStartSLOduration=118.625126257 podStartE2EDuration="1m58.625126257s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:40.624701256 +0000 UTC m=+144.322106035" watchObservedRunningTime="2025-11-23 06:52:40.625126257 +0000 UTC m=+144.322531036" Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.663238 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" podStartSLOduration=118.663223396 podStartE2EDuration="1m58.663223396s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:40.662024177 +0000 UTC m=+144.359428956" watchObservedRunningTime="2025-11-23 06:52:40.663223396 +0000 UTC m=+144.360628175" Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.721920 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.722289 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.222262422 +0000 UTC m=+144.919667201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.779716 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vm29q" podStartSLOduration=118.779696988 podStartE2EDuration="1m58.779696988s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:40.736267657 +0000 UTC m=+144.433672436" watchObservedRunningTime="2025-11-23 06:52:40.779696988 +0000 UTC m=+144.477101767" Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.781006 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" podStartSLOduration=118.78099875 podStartE2EDuration="1m58.78099875s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:40.777239318 +0000 UTC m=+144.474644107" watchObservedRunningTime="2025-11-23 06:52:40.78099875 +0000 UTC m=+144.478403519" Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.793888 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-m4qn7" podStartSLOduration=118.793869638 podStartE2EDuration="1m58.793869638s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:40.793284483 +0000 UTC m=+144.490689262" watchObservedRunningTime="2025-11-23 06:52:40.793869638 +0000 UTC m=+144.491274417" Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.823413 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.823779 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.323758965 +0000 UTC m=+145.021163744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:40 crc kubenswrapper[5028]: I1123 06:52:40.925140 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:40 crc kubenswrapper[5028]: E1123 06:52:40.925452 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.425440782 +0000 UTC m=+145.122845561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.026194 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.026899 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.526883884 +0000 UTC m=+145.224288663 (durationBeforeRetry 500ms). 
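[Editor's note] In the "Observed pod startup duration" entries above, the entries themselves show that podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp: the ~1m58s values belong to pods created at 06:50:42 that only reported running around 06:52:40, i.e. they waited out the node's own startup. The m=+144.xxx suffixes are Go monotonic-clock offsets from kubelet process start. A quick sanity check of the machine-api-operator entry, using timestamps copied from the log:

```go
// Sanity-check one podStartE2EDuration from the log:
// watchObservedRunningTime - podCreationTimestamp = 1m58.547652147s,
// exactly the logged podStartE2EDuration.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-23 06:50:42 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-23 06:52:40.547652147 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 1m58.547652147s
}
```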
Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.026899 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.526883884 +0000 UTC m=+145.224288663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.127762 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.128127 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.62811255 +0000 UTC m=+145.325517329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.141691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" event={"ID":"48e2ebf7-77fd-43a3-9e8b-d89458a00707","Type":"ContainerStarted","Data":"6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.142155 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7"
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.143244 5028 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wfgw7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.143294 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.143849 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" event={"ID":"3baecb2d-3513-4920-a11b-18947bda4669","Type":"ContainerStarted","Data":"292517d31aa8c092eb5aa5b9285d11dc1577e9cdbf80d71fe154c3d66ac2c4df"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.145030 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" event={"ID":"f82cfa58-77b7-450f-b554-1db8ad48b250","Type":"ContainerStarted","Data":"a4f6e5904d5bc76c8a0aa4a2ed7a69f411901aee953532583e67a1a5d3d90f8c"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.146488 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" event={"ID":"857ed9f1-ee3f-4d84-8945-71b7211bcf02","Type":"ContainerStarted","Data":"bd8c7ae5cbe2fbdb1c71757920f94d03473b5e557e9394ee6adb4c70683418bd"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.148112 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" event={"ID":"bfc84a0c-b65b-4a10-b0f2-9811c348bda2","Type":"ContainerStarted","Data":"8aa0e0bc40dad213773747b19d4ac19c4182aae673d95e20892f213cb8f61afc"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.150353 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" event={"ID":"e193cc0a-93bc-4f63-8b56-255209ee7c66","Type":"ContainerStarted","Data":"951d3647ce4a3fef6e6f9292830e059c97e09ff54ef3c9042544a8166a18deca"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.151701 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-j9tbl" event={"ID":"271c7a5f-b0ff-458f-804e-34922261cb06","Type":"ContainerStarted","Data":"520b4df29d522210de28c6aaef086168142c16eb71a93997dd54edfce9818dc7"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.152890 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" event={"ID":"b9eec920-6d0b-4064-8dd8-44b2f4fe722a","Type":"ContainerStarted","Data":"9cb79e3e18d25fc57ba786e3eea1ffbe40b062af1b9b8b9766b6bba9bf036832"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.154641 5028 generic.go:334] "Generic (PLEG): container finished" podID="64fac0c1-4e23-48c0-a162-f77370e3497e" containerID="0068008a52fcfb24c24ea9dcdeb2fac1684ec5ab9dfd9e0dd9bb7a8a09f3d25d" exitCode=0
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.154698 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" event={"ID":"64fac0c1-4e23-48c0-a162-f77370e3497e","Type":"ContainerDied","Data":"0068008a52fcfb24c24ea9dcdeb2fac1684ec5ab9dfd9e0dd9bb7a8a09f3d25d"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.155633 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" event={"ID":"2220b96b-60ef-46c9-850e-4e7a38727019","Type":"ContainerStarted","Data":"04a52ae8ed42839b6f002fc30a47e77fdbacfbf68b3647e2f809bd7578f65cc5"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.156629 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" event={"ID":"a0bec766-1a35-4ba4-b2f0-5f624982b6dd","Type":"ContainerStarted","Data":"b4e1136eb303c4f77713d887cda0184fdfe54482d270fb3eb5a3086dec988f3d"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.157737 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" event={"ID":"67f4ce1c-e4ff-4127-a3c9-a904623caeb9","Type":"ContainerStarted","Data":"92ae73b3724d51931f0833beec85353964546e811be95a0014a48987ffe2cab1"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.158591 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" event={"ID":"21e76c25-ba6e-439a-8e9e-6650b7bda321","Type":"ContainerStarted","Data":"d0e1e57f22584e0d8f979a1218e5b2bc0fe1a250826b27bcc5985e659bbd2c77"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.161007 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" event={"ID":"9f8d8348-6865-401f-aa66-63404d4a2869","Type":"ContainerStarted","Data":"ce72a0e615a30a81b8fa5dbbfb4732ad4ac42b2b1f82d0a4e8f012d92201c86a"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.162996 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" event={"ID":"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd","Type":"ContainerStarted","Data":"727fe40436925feb9052e70ff47a6e06881dd71e1527edb8ff28fdab2705ec88"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.166399 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" event={"ID":"6e34e5a9-95a3-43d5-8c81-ed837e907109","Type":"ContainerStarted","Data":"7cf71e3c51987029cdc7fca4acb45aa09082a3c4c03ebb1211e0f9dd019a5657"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.167711 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" event={"ID":"dc33ea4b-0b9c-4f25-8a84-337153c350c1","Type":"ContainerStarted","Data":"48114ce75981080ec5b79b038c64093a34cec2c745b759ab7b70f3f6b16353f1"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.168335 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hbwhl"
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.169683 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pppdd" event={"ID":"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94","Type":"ContainerStarted","Data":"6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.172345 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nv45l" event={"ID":"abfae71c-faa4-4d70-989c-7a248d6730e0","Type":"ContainerStarted","Data":"02e30d9d0a3d3507e69a86960f222cef62e0d30e3860612d0b2c01d1d6885724"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.172834 5028 patch_prober.go:28] interesting pod/console-operator-58897d9998-hbwhl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.172874 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" podUID="dc33ea4b-0b9c-4f25-8a84-337153c350c1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.173986 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" event={"ID":"2ca22d3a-be54-4425-ba62-490d86d77e02","Type":"ContainerStarted","Data":"e364fdd9efe7a8c073c99a3d4440dc2ed5df67ecb01df498d2e10f05a3a366a4"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.175216 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" event={"ID":"ad4766e1-6147-47be-8a4a-bb52d8370962","Type":"ContainerStarted","Data":"9cc580ba01816a8e6e71cd8b6c4cbd2358571e29ae0561b6a3cdb45483b66fa0"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.175918 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" event={"ID":"012b6f83-ae0e-4a83-b806-6634bb4c1f4a","Type":"ContainerStarted","Data":"86540e52dd48ff3ce80fbf8ddf9821521e3dc03d4483fbd8744db9a93dd54c5b"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.176662 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" event={"ID":"4a04f0f8-5007-43d3-907b-d35f7e68b40f","Type":"ContainerStarted","Data":"37452cd8f21caf36b6b255a8822f95d6dd398e29a266790830038d1d4a499516"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.177598 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-dgqrn" event={"ID":"aeff5e8e-1eb5-49cd-b72c-b01283e487ca","Type":"ContainerStarted","Data":"40c5e03c40d1433d7c56bb11b591878869e2744f6dd70298e21d110d30bfddcc"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.178269 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" event={"ID":"ce1e72c5-9d4f-47ff-805d-921034752820","Type":"ContainerStarted","Data":"fe996479e3b9d2720489b99db9fad25bd9dadee7648169acfe2fe58801e71cd3"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.179391 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-j9tbl" podStartSLOduration=6.179364564 podStartE2EDuration="6.179364564s" podCreationTimestamp="2025-11-23 06:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:41.174839492 +0000 UTC m=+144.872244271" watchObservedRunningTime="2025-11-23 06:52:41.179364564 +0000 UTC m=+144.876769343"
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.179805 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" event={"ID":"0d14055c-94d0-4e92-a453-05c96dd4387b","Type":"ContainerStarted","Data":"c1291dfcddf7017da99cf0746ce4b77b2f4ee76b1c3092dfa0928e1964511715"}
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.180010 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" podStartSLOduration=119.179820165 podStartE2EDuration="1m59.179820165s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:41.161310768 +0000 UTC m=+144.858715557" watchObservedRunningTime="2025-11-23 06:52:41.179820165 +0000 UTC m=+144.877224944"
Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.181113 5028 kubelet.go:2453] "SyncLoop
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" event={"ID":"851ad942-5cf5-45a1-8b47-3c4adbfaef4a","Type":"ContainerStarted","Data":"d5762fe3ed6942b1d35471314c8cabe35f5761b98908857af1ae71cea066a6b7"} Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.181652 5028 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tcdk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.181708 5028 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jjtp4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.181710 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.181745 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" podUID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.181907 5028 patch_prober.go:28] interesting pod/downloads-7954f5f757-m4qn7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.182018 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m4qn7" podUID="e3c4cf13-f6af-4121-9feb-653a6abd396a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.190696 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ng58c" podStartSLOduration=119.190677453 podStartE2EDuration="1m59.190677453s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:41.188918649 +0000 UTC m=+144.886323428" watchObservedRunningTime="2025-11-23 06:52:41.190677453 +0000 UTC m=+144.888082232" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.207625 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hb2pv" podStartSLOduration=119.20760437 podStartE2EDuration="1m59.20760437s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-23 06:52:41.204651577 +0000 UTC m=+144.902056356" watchObservedRunningTime="2025-11-23 06:52:41.20760437 +0000 UTC m=+144.905009149" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.219472 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2n5wq" podStartSLOduration=119.219455882 podStartE2EDuration="1m59.219455882s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:41.21653217 +0000 UTC m=+144.913936949" watchObservedRunningTime="2025-11-23 06:52:41.219455882 +0000 UTC m=+144.916860661" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.228789 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.229015 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.728981797 +0000 UTC m=+145.426386576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.229075 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.229577 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.729565582 +0000 UTC m=+145.426970361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.237894 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-blqlb" podStartSLOduration=119.237877357 podStartE2EDuration="1m59.237877357s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:41.235537239 +0000 UTC m=+144.932942018" watchObservedRunningTime="2025-11-23 06:52:41.237877357 +0000 UTC m=+144.935282136" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.250419 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.250473 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.258429 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmtfk" podStartSLOduration=119.258413193 podStartE2EDuration="1m59.258413193s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:41.257859029 +0000 UTC m=+144.955263808" watchObservedRunningTime="2025-11-23 06:52:41.258413193 +0000 UTC m=+144.955817972" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.277502 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" podStartSLOduration=119.277478163 podStartE2EDuration="1m59.277478163s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:41.27656021 +0000 UTC m=+144.973964979" watchObservedRunningTime="2025-11-23 06:52:41.277478163 +0000 UTC m=+144.974882942" Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.331423 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.333028 5028 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.833009152 +0000 UTC m=+145.530413931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.433396 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.434076 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:41.934051524 +0000 UTC m=+145.631456343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.534674 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.535198 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.035179098 +0000 UTC m=+145.732583887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.636047 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.636451 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.136437845 +0000 UTC m=+145.833842624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.737486 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.737625 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.237607289 +0000 UTC m=+145.935012068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.737892 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.738228 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.238218994 +0000 UTC m=+145.935623773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.842935 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.843813 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.343794918 +0000 UTC m=+146.041199697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:41 crc kubenswrapper[5028]: I1123 06:52:41.945164 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:41 crc kubenswrapper[5028]: E1123 06:52:41.945617 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.445593868 +0000 UTC m=+146.142998647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.046710 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.047354 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.547326167 +0000 UTC m=+146.244730946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.148881 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.149388 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.649371083 +0000 UTC m=+146.346775862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.186288 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" event={"ID":"3a06ae06-c671-4329-885f-930b0847abac","Type":"ContainerStarted","Data":"2198a88db2b00b0a0b32b20de6f9d4c6c3208172bc6d5bbeabc4aa1a295082e6"} Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.189760 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" event={"ID":"1cfb9270-8022-420c-b9e3-ecdfed90d6bf","Type":"ContainerStarted","Data":"9e90820ab83b4052fdba94cedfdf61caae4ab52d8709d620d41807c3304ca147"} Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.192125 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v" event={"ID":"1d4db5b1-ec96-442c-a106-3bdcddcaa3b5","Type":"ContainerStarted","Data":"aa2e937f6ec2278e801ca4c1027501bc0dca439bf4eb3667e28d217ee0b8a791"} Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.194357 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" event={"ID":"3d7b319c-9cbb-4d0b-9c27-ce83e4dc6160","Type":"ContainerStarted","Data":"c098563ea89671971d18b960cbb67f1cc5d040881eaf1f576fb01900e69eb108"} Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.195614 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" event={"ID":"fcf9070d-93b3-468e-93c2-094b7fbe5a6b","Type":"ContainerStarted","Data":"549b857d124f49015d4c6b7cb0cdbdb3cf1d69f727cb79eced6da2c0ab79cfd0"} Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.196507 5028 
patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wfgw7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.196555 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.196687 5028 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tcdk5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.196735 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.196688 5028 patch_prober.go:28] interesting pod/console-operator-58897d9998-hbwhl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.196778 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" podUID="dc33ea4b-0b9c-4f25-8a84-337153c350c1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.247809 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.247895 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.250333 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.250544 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 06:52:42.750514337 +0000 UTC m=+146.447919116 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.250713 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.251120 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.751106822 +0000 UTC m=+146.448511601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.352276 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.352447 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.8524144 +0000 UTC m=+146.549819179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.353320 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.354934 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.854910111 +0000 UTC m=+146.552314960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.454943 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.455171 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.955137623 +0000 UTC m=+146.652542392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.455326 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.455660 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:42.955644235 +0000 UTC m=+146.653049024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.556025 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.556196 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.056180414 +0000 UTC m=+146.753585193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.556302 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.556602 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.056594455 +0000 UTC m=+146.753999224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.656876 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.657599 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.157582115 +0000 UTC m=+146.854986884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.758750 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.759138 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.259125429 +0000 UTC m=+146.956530208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.859382 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.859857 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.359836172 +0000 UTC m=+147.057240951 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:42 crc kubenswrapper[5028]: I1123 06:52:42.962875 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:42 crc kubenswrapper[5028]: E1123 06:52:42.963341 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.463321474 +0000 UTC m=+147.160726253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.064453 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.065463 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.565440502 +0000 UTC m=+147.262845281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.166137 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.168166 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.668153295 +0000 UTC m=+147.365558074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.212315 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" event={"ID":"0d14055c-94d0-4e92-a453-05c96dd4387b","Type":"ContainerStarted","Data":"f86ace17ac0906a317f0aeb9f9e4b3f5e101f0789edd285193d309e7925de39d"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.215455 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" event={"ID":"b45d44a5-1077-40b8-8faf-0b206cdac95b","Type":"ContainerStarted","Data":"b41244f1bedfddee8f676dfbe4865b0b5c05ece704eb3e8f009db800d06534c4"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.218191 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" event={"ID":"fab3473d-0543-4160-8ad4-f262ec89e82b","Type":"ContainerStarted","Data":"26b150f57bfa4845a597b6145a89ff3ae12a7c21fc452e0f64daba97c9f81293"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.220253 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" event={"ID":"2220b96b-60ef-46c9-850e-4e7a38727019","Type":"ContainerStarted","Data":"fe0b757b2c57a47611300f283e854e068001983229e4da188420b66d91cfc508"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.222841 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" event={"ID":"64fac0c1-4e23-48c0-a162-f77370e3497e","Type":"ContainerStarted","Data":"e802517d70ac8f8a7de798bcf22a0bdbaaa56c03c036ca2f77d92164ba6af7db"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.224110 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" event={"ID":"f82cfa58-77b7-450f-b554-1db8ad48b250","Type":"ContainerStarted","Data":"97b362b2e73fb203422d6172b3c5d853dfb7f02a68a9c96d3f5fed78d6ce2971"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.226650 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" event={"ID":"ce1d92fe-ce3d-46df-8c5a-1896d7115fdd","Type":"ContainerStarted","Data":"309286222a0cdd62a0d08a603321fbf9b75a72a3913ecf50685a46ed9ee80146"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.228633 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" event={"ID":"4a04f0f8-5007-43d3-907b-d35f7e68b40f","Type":"ContainerStarted","Data":"5ca2ed38636c3b7f5f063a56f07ebdb0430f7417a3cba13d801e80c6c9f67762"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.229454 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.231455 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" event={"ID":"e193cc0a-93bc-4f63-8b56-255209ee7c66","Type":"ContainerStarted","Data":"df02d7f1b5f9cf63c06530e277c24f1eeac66d7f34fc2e68b6965f51df726633"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.243693 5028 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-682pg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.243751 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" podUID="4a04f0f8-5007-43d3-907b-d35f7e68b40f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.262189 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:43 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:43 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:43 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.262257 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.270831 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.273077 5028 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.773048412 +0000 UTC m=+147.470453181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.275542 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" podStartSLOduration=121.275520213 podStartE2EDuration="2m1.275520213s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.272773935 +0000 UTC m=+146.970178724" watchObservedRunningTime="2025-11-23 06:52:43.275520213 +0000 UTC m=+146.972924982" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.277303 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-q44fq" podStartSLOduration=121.277295806 podStartE2EDuration="2m1.277295806s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.253419378 +0000 UTC m=+146.950824157" watchObservedRunningTime="2025-11-23 06:52:43.277295806 +0000 UTC m=+146.974700585" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.280064 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" event={"ID":"ce1e72c5-9d4f-47ff-805d-921034752820","Type":"ContainerStarted","Data":"4d77dd71de812c6df10c99ec974a4639d39a3ed18273efda7bbfad7bca1d87c4"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.301459 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7jk29" podStartSLOduration=121.301442772 podStartE2EDuration="2m1.301442772s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.298567581 +0000 UTC m=+146.995972360" watchObservedRunningTime="2025-11-23 06:52:43.301442772 +0000 UTC m=+146.998847551" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.309824 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" event={"ID":"21e76c25-ba6e-439a-8e9e-6650b7bda321","Type":"ContainerStarted","Data":"44a5087c0b9fcc58361609c95f1e151273d6e5c2fe6c67028efeeafdddf1a37a"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.319104 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" 
event={"ID":"b9eec920-6d0b-4064-8dd8-44b2f4fe722a","Type":"ContainerStarted","Data":"edbf294e617b58333e19c5827eb06cd01a554acd943a6753fdb54a07a236fd9e"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.349675 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-nvfz9" podStartSLOduration=121.34964466 podStartE2EDuration="2m1.34964466s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.345111069 +0000 UTC m=+147.042515848" watchObservedRunningTime="2025-11-23 06:52:43.34964466 +0000 UTC m=+147.047049439" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.360497 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nv45l" event={"ID":"abfae71c-faa4-4d70-989c-7a248d6730e0","Type":"ContainerStarted","Data":"37a50a56014eb938afedc9a49b0110f875e80fc5c1298f5d73302a567c14c219"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.370464 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" event={"ID":"851ad942-5cf5-45a1-8b47-3c4adbfaef4a","Type":"ContainerStarted","Data":"77a19e906cd1351c759c2d57e85098a91b0ac495285447e7183eeac6c3bf6a78"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.376038 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" event={"ID":"a0bec766-1a35-4ba4-b2f0-5f624982b6dd","Type":"ContainerStarted","Data":"0dae07fa4539b0e26ec938ff370eaefa4c9e198ba7995cd1ca3c3a67ad7f1f60"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.376169 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" podStartSLOduration=121.376148914 podStartE2EDuration="2m1.376148914s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.374431632 +0000 UTC m=+147.071836421" watchObservedRunningTime="2025-11-23 06:52:43.376148914 +0000 UTC m=+147.073553693" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.376597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.377352 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.381389 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:43.881368393 +0000 UTC m=+147.578773172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.385798 5028 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-s8wh2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.385876 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" podUID="a0bec766-1a35-4ba4-b2f0-5f624982b6dd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.393393 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" event={"ID":"ad4766e1-6147-47be-8a4a-bb52d8370962","Type":"ContainerStarted","Data":"0d130addc87043e65b62950ff8c0876e2cc0937787fc5176744314033c590952"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.412130 5028 generic.go:334] "Generic (PLEG): container finished" podID="6e34e5a9-95a3-43d5-8c81-ed837e907109" containerID="0355b56923e9ee41ae9b083b49a9943001965e5a2b0675fa52d086b665dfebe7" exitCode=0 Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.412543 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" event={"ID":"6e34e5a9-95a3-43d5-8c81-ed837e907109","Type":"ContainerDied","Data":"0355b56923e9ee41ae9b083b49a9943001965e5a2b0675fa52d086b665dfebe7"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.422622 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gc5xr" podStartSLOduration=121.422599419 podStartE2EDuration="2m1.422599419s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.422267611 +0000 UTC m=+147.119672410" watchObservedRunningTime="2025-11-23 06:52:43.422599419 +0000 UTC m=+147.120004198" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.423881 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bxbmw" podStartSLOduration=121.423871671 podStartE2EDuration="2m1.423871671s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.402087924 +0000 UTC m=+147.099492723" watchObservedRunningTime="2025-11-23 06:52:43.423871671 +0000 UTC m=+147.121276450" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.428011 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" 
event={"ID":"012b6f83-ae0e-4a83-b806-6634bb4c1f4a","Type":"ContainerStarted","Data":"4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44"} Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.428557 5028 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wfgw7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.435134 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.428793 5028 patch_prober.go:28] interesting pod/console-operator-58897d9998-hbwhl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.435201 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" podUID="dc33ea4b-0b9c-4f25-8a84-337153c350c1" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.447479 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" podStartSLOduration=121.447449962 podStartE2EDuration="2m1.447449962s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.446697404 +0000 UTC m=+147.144102183" watchObservedRunningTime="2025-11-23 06:52:43.447449962 +0000 UTC m=+147.144854741" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.473504 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rghr8" podStartSLOduration=121.473482584 podStartE2EDuration="2m1.473482584s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.471831013 +0000 UTC m=+147.169235792" watchObservedRunningTime="2025-11-23 06:52:43.473482584 +0000 UTC m=+147.170887353" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.479309 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.481039 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-23 06:52:43.98100383 +0000 UTC m=+147.678408609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.494040 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" podStartSLOduration=121.494023731 podStartE2EDuration="2m1.494023731s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.491890048 +0000 UTC m=+147.189294827" watchObservedRunningTime="2025-11-23 06:52:43.494023731 +0000 UTC m=+147.191428510" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.518698 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-pppdd" podStartSLOduration=121.518666858 podStartE2EDuration="2m1.518666858s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.513288846 +0000 UTC m=+147.210693645" watchObservedRunningTime="2025-11-23 06:52:43.518666858 +0000 UTC m=+147.216071637" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.539816 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tbbmb" podStartSLOduration=121.539789269 podStartE2EDuration="2m1.539789269s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.534770255 +0000 UTC m=+147.232175024" watchObservedRunningTime="2025-11-23 06:52:43.539789269 +0000 UTC m=+147.237194048" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.560029 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-kkknq" podStartSLOduration=121.559989407 podStartE2EDuration="2m1.559989407s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.552684057 +0000 UTC m=+147.250088836" watchObservedRunningTime="2025-11-23 06:52:43.559989407 +0000 UTC m=+147.257394176" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.582769 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.583671 5028 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.08363242 +0000 UTC m=+147.781037389 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.610412 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jnb2v" podStartSLOduration=121.6103756 podStartE2EDuration="2m1.6103756s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.581078667 +0000 UTC m=+147.278483446" watchObservedRunningTime="2025-11-23 06:52:43.6103756 +0000 UTC m=+147.307780399" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.634215 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-dgqrn" podStartSLOduration=8.634188937 podStartE2EDuration="8.634188937s" podCreationTimestamp="2025-11-23 06:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.632553087 +0000 UTC m=+147.329957886" watchObservedRunningTime="2025-11-23 06:52:43.634188937 +0000 UTC m=+147.331593716" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.641034 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" podStartSLOduration=121.641010495 podStartE2EDuration="2m1.641010495s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:43.610675887 +0000 UTC m=+147.308080676" watchObservedRunningTime="2025-11-23 06:52:43.641010495 +0000 UTC m=+147.338415274" Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.684301 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.685143 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.185111403 +0000 UTC m=+147.882516182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.789665 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.790077 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.290061631 +0000 UTC m=+147.987466410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.890929 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.891070 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.391047711 +0000 UTC m=+148.088452490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.892083 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.892473 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.392462756 +0000 UTC m=+148.089867525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.993658 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.993808 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.493778364 +0000 UTC m=+148.191183143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:43 crc kubenswrapper[5028]: I1123 06:52:43.993892 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:43 crc kubenswrapper[5028]: E1123 06:52:43.994228 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.494218275 +0000 UTC m=+148.191623114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.094996 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.095308 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.595292537 +0000 UTC m=+148.292697316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.196493 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.196868 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.696854141 +0000 UTC m=+148.394258920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.248902 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:44 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:44 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:44 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.248989 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.297692 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.298426 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.798410665 +0000 UTC m=+148.495815444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.400208 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.400680 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:44.900643286 +0000 UTC m=+148.598048065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.437005 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" event={"ID":"2220b96b-60ef-46c9-850e-4e7a38727019","Type":"ContainerDied","Data":"fe0b757b2c57a47611300f283e854e068001983229e4da188420b66d91cfc508"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.436850 5028 generic.go:334] "Generic (PLEG): container finished" podID="2220b96b-60ef-46c9-850e-4e7a38727019" containerID="fe0b757b2c57a47611300f283e854e068001983229e4da188420b66d91cfc508" exitCode=0 Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.440241 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" event={"ID":"3a06ae06-c671-4329-885f-930b0847abac","Type":"ContainerStarted","Data":"b0714bfdc3cd8e61abaa3ad4e10974ca1cac7fff24a411fd9c0ea2ef59160dcf"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.441970 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" event={"ID":"ad4766e1-6147-47be-8a4a-bb52d8370962","Type":"ContainerStarted","Data":"27918f905d034f67e4c34b3bb466174d2d1e94b5b76f616691aed4b99e957bad"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.444502 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" event={"ID":"21e76c25-ba6e-439a-8e9e-6650b7bda321","Type":"ContainerStarted","Data":"9da9e26d3f4d5652abcec1988a042c966a7a323b200d6cef63c2ba7bf64305ac"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.446648 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" 
event={"ID":"6e34e5a9-95a3-43d5-8c81-ed837e907109","Type":"ContainerStarted","Data":"04876dfcdef5b5bacf13d1b728ae5c7bb515f8e55fb0a5707a41b75c12bfb7df"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.448937 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nv45l" event={"ID":"abfae71c-faa4-4d70-989c-7a248d6730e0","Type":"ContainerStarted","Data":"206e63f3f3e4445b245007eaba1ad5a33302aa067a46274324e4c049169288d5"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.449544 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.451465 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" event={"ID":"64fac0c1-4e23-48c0-a162-f77370e3497e","Type":"ContainerStarted","Data":"809baafe834af218c59ffacaa6c1b91da0cf0a4f55f2760073765ddd7a8a9c22"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.453230 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" event={"ID":"0d14055c-94d0-4e92-a453-05c96dd4387b","Type":"ContainerStarted","Data":"4307f562a756070e730d9c4422a9c405a3232a3c51c9aa85d2a79aea2b6c0e70"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.455791 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" event={"ID":"1cfb9270-8022-420c-b9e3-ecdfed90d6bf","Type":"ContainerStarted","Data":"e823876e6663eafb2ab0404207cac86c602ddccbe3ae28c6776fa43bc3869700"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.469136 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" event={"ID":"851ad942-5cf5-45a1-8b47-3c4adbfaef4a","Type":"ContainerStarted","Data":"e5bdf0cb64a0c096e554e204bea1f612bed339e25573f324c885787a14b3581e"} Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.470900 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.472451 5028 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzg2c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.472500 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" podUID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.473382 5028 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-s8wh2 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.473408 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" podUID="a0bec766-1a35-4ba4-b2f0-5f624982b6dd" 
containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.473457 5028 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-682pg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.473471 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" podUID="4a04f0f8-5007-43d3-907b-d35f7e68b40f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.490656 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nv45l" podStartSLOduration=9.490639285 podStartE2EDuration="9.490639285s" podCreationTimestamp="2025-11-23 06:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.490625595 +0000 UTC m=+148.188030374" watchObservedRunningTime="2025-11-23 06:52:44.490639285 +0000 UTC m=+148.188044064" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.501858 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.502290 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.002267772 +0000 UTC m=+148.699672551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.553579 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" podStartSLOduration=122.553560577 podStartE2EDuration="2m2.553560577s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.522345707 +0000 UTC m=+148.219750486" watchObservedRunningTime="2025-11-23 06:52:44.553560577 +0000 UTC m=+148.250965356" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.578925 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bdz8r" podStartSLOduration=122.578902092 podStartE2EDuration="2m2.578902092s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.555470934 +0000 UTC m=+148.252875713" watchObservedRunningTime="2025-11-23 06:52:44.578902092 +0000 UTC m=+148.276306871" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.580526 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rkqnx" podStartSLOduration=122.580516031 podStartE2EDuration="2m2.580516031s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.57803822 +0000 UTC m=+148.275443009" watchObservedRunningTime="2025-11-23 06:52:44.580516031 +0000 UTC m=+148.277920810" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.601621 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qlj5c" podStartSLOduration=122.601601451 podStartE2EDuration="2m2.601601451s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.60153648 +0000 UTC m=+148.298941259" watchObservedRunningTime="2025-11-23 06:52:44.601601451 +0000 UTC m=+148.299006230" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.605171 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.615291 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-23 06:52:45.115262008 +0000 UTC m=+148.812666787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.629578 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-57wj4" podStartSLOduration=122.629560251 podStartE2EDuration="2m2.629560251s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.62751517 +0000 UTC m=+148.324919949" watchObservedRunningTime="2025-11-23 06:52:44.629560251 +0000 UTC m=+148.326965030" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.664782 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" podStartSLOduration=122.664764709 podStartE2EDuration="2m2.664764709s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.661543789 +0000 UTC m=+148.358948568" watchObservedRunningTime="2025-11-23 06:52:44.664764709 +0000 UTC m=+148.362169488" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.706433 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.706846 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.206801086 +0000 UTC m=+148.904205865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.707036 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.707364 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.207329959 +0000 UTC m=+148.904734738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.728723 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" podStartSLOduration=122.728695815 podStartE2EDuration="2m2.728695815s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.727828434 +0000 UTC m=+148.425233213" watchObservedRunningTime="2025-11-23 06:52:44.728695815 +0000 UTC m=+148.426100594" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.729053 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xndhh" podStartSLOduration=122.729046084 podStartE2EDuration="2m2.729046084s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:44.696690256 +0000 UTC m=+148.394095035" watchObservedRunningTime="2025-11-23 06:52:44.729046084 +0000 UTC m=+148.426450863" Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.808645 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.808753 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.308734149 +0000 UTC m=+149.006138928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.809174 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.809452 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.309444767 +0000 UTC m=+149.006849546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.910497 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.910705 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.410676433 +0000 UTC m=+149.108081212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:44 crc kubenswrapper[5028]: I1123 06:52:44.910850 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:44 crc kubenswrapper[5028]: E1123 06:52:44.911223 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.411214786 +0000 UTC m=+149.108619565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.012743 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.012907 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.512873633 +0000 UTC m=+149.210278412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.013433 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.013482 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.013537 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.014180 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.514154285 +0000 UTC m=+149.211559054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.014817 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.014889 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.021052 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.021227 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.029191 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.039058 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.083046 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.090293 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.116275 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.116776 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.616725834 +0000 UTC m=+149.314130633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.170120 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.217835 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.218795 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.71878065 +0000 UTC m=+149.416185429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.250895 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:45 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:45 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:45 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.250980 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.319097 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.320114 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.820068368 +0000 UTC m=+149.517473147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.424711 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.425035 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:45.925020856 +0000 UTC m=+149.622425635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.502600 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" event={"ID":"2220b96b-60ef-46c9-850e-4e7a38727019","Type":"ContainerStarted","Data":"cac5e5cc03376533e5c530197300d3bc41d9cbef0ba01738136b28333d237661"} Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.503931 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.518186 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" event={"ID":"94d1e6cc-d93d-4c83-82f3-3e84551beace","Type":"ContainerStarted","Data":"43c8129392803a87b93514ce11fa90320afa69ad0a8ffaa9fb14ac601b6d6616"} Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.518551 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.522254 5028 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fzg2c container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.522310 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" podUID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.532505 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.533045 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.033024759 +0000 UTC m=+149.730429538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.542366 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s8wh2" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.579586 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" podStartSLOduration=123.579560487 podStartE2EDuration="2m3.579560487s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:45.578655845 +0000 UTC m=+149.276060624" watchObservedRunningTime="2025-11-23 06:52:45.579560487 +0000 UTC m=+149.276965266" Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.636717 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.655547 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.15553002 +0000 UTC m=+149.852934799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.739137 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.739537 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.239510361 +0000 UTC m=+149.936915150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.841362 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.841721 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.341707861 +0000 UTC m=+150.039112640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:45 crc kubenswrapper[5028]: W1123 06:52:45.915939 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-cb47444993537036467cc63526905d9aeedac9bc992cf7d74d32ec5d6a639d8c WatchSource:0}: Error finding container cb47444993537036467cc63526905d9aeedac9bc992cf7d74d32ec5d6a639d8c: Status 404 returned error can't find the container with id cb47444993537036467cc63526905d9aeedac9bc992cf7d74d32ec5d6a639d8c Nov 23 06:52:45 crc kubenswrapper[5028]: I1123 06:52:45.950421 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:45 crc kubenswrapper[5028]: E1123 06:52:45.950762 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.45074799 +0000 UTC m=+150.148152769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: W1123 06:52:46.025080 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-d74ba8caea2c0f556bef172fa5fcf93a84db4f9acf0408f19d7739050402da9d WatchSource:0}: Error finding container d74ba8caea2c0f556bef172fa5fcf93a84db4f9acf0408f19d7739050402da9d: Status 404 returned error can't find the container with id d74ba8caea2c0f556bef172fa5fcf93a84db4f9acf0408f19d7739050402da9d Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.053793 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.054524 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.554499468 +0000 UTC m=+150.251904247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.155499 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.155895 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.655879728 +0000 UTC m=+150.353284507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.251559 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:46 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:46 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:46 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.251641 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.257438 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.257762 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.75774853 +0000 UTC m=+150.455153309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.359627 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.359829 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.859794027 +0000 UTC m=+150.557198806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.359932 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.360251 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.860242358 +0000 UTC m=+150.557647137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.460986 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.461184 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.961151546 +0000 UTC m=+150.658556325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.461358 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.461744 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:46.96172718 +0000 UTC m=+150.659131959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.524658 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e50941ef828f06413fbe9f5f8c00047a719227c567e052029625880337794428"} Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.524715 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cb47444993537036467cc63526905d9aeedac9bc992cf7d74d32ec5d6a639d8c"} Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.527058 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" event={"ID":"94d1e6cc-d93d-4c83-82f3-3e84551beace","Type":"ContainerStarted","Data":"0368ff58038f525d508bb70fd5a6b106de53fff75a9d4b7e16d5e1cc3a18b89f"} Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.528286 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e267fac50a34dd99126de5f3ae90da79671be96bd07f456ef74fcdae60d25964"} Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.528311 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d74ba8caea2c0f556bef172fa5fcf93a84db4f9acf0408f19d7739050402da9d"} Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.528630 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.530030 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"92e99c6c7f0688ef0bb1406006950bdbb16b76b2f446d62225e20eae07e38ace"} Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.530055 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"69c05f8c838c8cdfd18f6b9f6faf7fe4c7caf04cd3a03157f63181dd0450efc1"} Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.562725 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.562977 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.062922796 +0000 UTC m=+150.760327575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.563069 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.563415 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.063399187 +0000 UTC m=+150.760803966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.668054 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.669704 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.169686798 +0000 UTC m=+150.867091577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.771024 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.771457 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.271437617 +0000 UTC m=+150.968842396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.872388 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.872654 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.372613092 +0000 UTC m=+151.070017871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:46 crc kubenswrapper[5028]: I1123 06:52:46.974925 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:46 crc kubenswrapper[5028]: E1123 06:52:46.975477 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.475455308 +0000 UTC m=+151.172860087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.017235 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m42bg"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.018545 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.024344 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.046361 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m42bg"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.085967 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.086330 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.586312272 +0000 UTC m=+151.283717051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.086423 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-utilities\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.086473 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.086493 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz62m\" (UniqueName: \"kubernetes.io/projected/266642af-8ffc-454c-b11e-d81483412956-kube-api-access-pz62m\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.086511 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-catalog-content\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.086826 5028 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.586818824 +0000 UTC m=+151.284223603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.188362 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.188580 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-utilities\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.188650 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz62m\" (UniqueName: \"kubernetes.io/projected/266642af-8ffc-454c-b11e-d81483412956-kube-api-access-pz62m\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.188677 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-catalog-content\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.189154 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-catalog-content\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.189242 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.689225839 +0000 UTC m=+151.386630618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.189404 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-utilities\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.205559 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8ljts"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.207808 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.214709 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.244226 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ljts"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.270192 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:47 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:47 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:47 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.270430 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.270320 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz62m\" (UniqueName: \"kubernetes.io/projected/266642af-8ffc-454c-b11e-d81483412956-kube-api-access-pz62m\") pod \"certified-operators-m42bg\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.290204 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-catalog-content\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.290254 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-utilities\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " 
pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.290281 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.290331 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ptjs\" (UniqueName: \"kubernetes.io/projected/16fab255-3f9b-4e01-af32-4fa824a86807-kube-api-access-4ptjs\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.290630 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.790618269 +0000 UTC m=+151.488023048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.340388 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.384103 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzngh"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.385028 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.392448 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.392682 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ptjs\" (UniqueName: \"kubernetes.io/projected/16fab255-3f9b-4e01-af32-4fa824a86807-kube-api-access-4ptjs\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.392732 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-catalog-content\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.392779 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-utilities\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.393308 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-utilities\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.393778 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.893757853 +0000 UTC m=+151.591162632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.394263 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-catalog-content\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.410099 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzngh"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.425265 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ptjs\" (UniqueName: \"kubernetes.io/projected/16fab255-3f9b-4e01-af32-4fa824a86807-kube-api-access-4ptjs\") pod \"community-operators-8ljts\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.495152 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-catalog-content\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.495194 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl49r\" (UniqueName: \"kubernetes.io/projected/8497e823-0924-49d2-a452-d0cb03f2926a-kube-api-access-cl49r\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.495229 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.495252 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-utilities\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.495535 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:47.995522092 +0000 UTC m=+151.692926871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.547427 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.586446 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-knhhl"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.587452 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.596577 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.596845 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-utilities\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.596985 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-catalog-content\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.597021 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl49r\" (UniqueName: \"kubernetes.io/projected/8497e823-0924-49d2-a452-d0cb03f2926a-kube-api-access-cl49r\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.597686 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.097654181 +0000 UTC m=+151.795058960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.597856 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-utilities\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.600331 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-catalog-content\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.610466 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" event={"ID":"94d1e6cc-d93d-4c83-82f3-3e84551beace","Type":"ContainerStarted","Data":"4f1b02b8da0351d5b8ebc94a65b745f482db93844c4255e9a44ac2e0a1656711"} Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.657853 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-knhhl"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.665319 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl49r\" (UniqueName: \"kubernetes.io/projected/8497e823-0924-49d2-a452-d0cb03f2926a-kube-api-access-cl49r\") pod \"certified-operators-mzngh\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.701508 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvjv\" (UniqueName: \"kubernetes.io/projected/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-kube-api-access-wtvjv\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.702025 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.702060 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-utilities\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.703982 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-catalog-content\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.705337 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.205320446 +0000 UTC m=+151.902725225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.711499 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.805369 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.807380 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.807660 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.807890 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtvjv\" (UniqueName: \"kubernetes.io/projected/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-kube-api-access-wtvjv\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.807970 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-utilities\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.808032 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.308000877 +0000 UTC m=+152.005405656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.808080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-catalog-content\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.808379 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-utilities\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.808425 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-catalog-content\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.809506 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.810619 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.810859 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.835554 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtvjv\" (UniqueName: \"kubernetes.io/projected/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-kube-api-access-wtvjv\") pod \"community-operators-knhhl\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.892171 5028 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.920082 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.920170 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:47 crc kubenswrapper[5028]: I1123 06:52:47.920208 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:47 crc kubenswrapper[5028]: E1123 06:52:47.920656 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.420636824 +0000 UTC m=+152.118041613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.001403 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.022101 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.022369 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.022407 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.022478 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: E1123 06:52:48.022550 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.522533677 +0000 UTC m=+152.219938456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.065516 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.090864 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.122765 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m42bg"] Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.124511 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:48 crc kubenswrapper[5028]: E1123 06:52:48.124991 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.624969773 +0000 UTC m=+152.322374562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.125667 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.138961 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.139680 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.147193 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.147641 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.147846 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.192848 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.195354 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.195665 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ljts"] Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.203156 5028 patch_prober.go:28] interesting pod/downloads-7954f5f757-m4qn7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.203229 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m4qn7" podUID="e3c4cf13-f6af-4121-9feb-653a6abd396a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.225163 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.225438 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09347c11-c01c-4e98-9e13-9fdf9ed45044-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.225510 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09347c11-c01c-4e98-9e13-9fdf9ed45044-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: E1123 06:52:48.226444 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.726427434 +0000 UTC m=+152.423832203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.239830 5028 patch_prober.go:28] interesting pod/downloads-7954f5f757-m4qn7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.239891 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m4qn7" podUID="e3c4cf13-f6af-4121-9feb-653a6abd396a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.247253 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vm29q"
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.251845 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 23 06:52:48 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld
Nov 23 06:52:48 crc kubenswrapper[5028]: [+]process-running ok
Nov 23 06:52:48 crc kubenswrapper[5028]: healthz check failed
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.251914 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.329558 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09347c11-c01c-4e98-9e13-9fdf9ed45044-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.329807 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks"
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.329908 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09347c11-c01c-4e98-9e13-9fdf9ed45044-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 23 06:52:48 crc kubenswrapper[5028]: E1123 06:52:48.332763 5028 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.832739546 +0000 UTC m=+152.530144315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.334920 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09347c11-c01c-4e98-9e13-9fdf9ed45044-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.366981 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzngh"] Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.391151 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09347c11-c01c-4e98-9e13-9fdf9ed45044-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.428629 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.429755 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.432049 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:48 crc kubenswrapper[5028]: E1123 06:52:48.432471 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-23 06:52:48.932452745 +0000 UTC m=+152.629857524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.437635 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.437683 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.450732 5028 patch_prober.go:28] interesting pod/console-f9d7485db-pppdd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.450819 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pppdd" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.512977 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.513009 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.513664 5028 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-23T06:52:47.892209133Z","Handler":null,"Name":""} Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.514466 5028 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7rpm6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]log ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]etcd ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/generic-apiserver-start-informers ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/max-in-flight-filter ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 23 06:52:48 crc kubenswrapper[5028]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 23 06:52:48 crc kubenswrapper[5028]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/project.openshift.io-projectcache ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/openshift.io-startinformers ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 23 06:52:48 crc kubenswrapper[5028]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 23 06:52:48 crc kubenswrapper[5028]: livez check failed Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.514509 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" podUID="64fac0c1-4e23-48c0-a162-f77370e3497e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.533576 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:48 crc kubenswrapper[5028]: E1123 06:52:48.538886 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-23 06:52:49.038850818 +0000 UTC m=+152.736255597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8mmks" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.541753 5028 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.541786 5028 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.556178 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hbwhl" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.556964 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8cd2d" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.652822 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.710076 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.711910 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.713391 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.725011 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ljts" event={"ID":"16fab255-3f9b-4e01-af32-4fa824a86807","Type":"ContainerStarted","Data":"da32b0a2dbb6cc02a3f06c2efddfd605d8bfe3565f2ad11b4e8cd30349e2bcbe"} Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.733600 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" event={"ID":"94d1e6cc-d93d-4c83-82f3-3e84551beace","Type":"ContainerStarted","Data":"c6ef6a922a2dd18d175405d9b6c7ce512d41d98eeb6d2a30eb2075f7335904b4"} Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.741668 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.742150 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42bg" event={"ID":"266642af-8ffc-454c-b11e-d81483412956","Type":"ContainerStarted","Data":"cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4"} Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.742243 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42bg" event={"ID":"266642af-8ffc-454c-b11e-d81483412956","Type":"ContainerStarted","Data":"71ff5c287cc3652cc691396e027abaf7e4cd763d36edd6bec6998487ace1e45c"} Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.743339 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.754013 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-682pg" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.757084 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.769879 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzngh" event={"ID":"8497e823-0924-49d2-a452-d0cb03f2926a","Type":"ContainerStarted","Data":"b8a4a961127bf60313f94751fb340782a73a267b433bea2826f498ae4e9afe04"} Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.771967 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.772015 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.775148 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.826103 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jsktx" podStartSLOduration=13.826087051 podStartE2EDuration="13.826087051s" podCreationTimestamp="2025-11-23 06:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:48.825293692 +0000 UTC m=+152.522698461" watchObservedRunningTime="2025-11-23 06:52:48.826087051 +0000 UTC m=+152.523491830" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.979919 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8mmks\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.981821 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-knhhl"] Nov 23 06:52:48 crc kubenswrapper[5028]: I1123 06:52:48.996202 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.078617 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.122878 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.153883 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4fc6h"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.165794 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fc6h"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.166038 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.170511 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.250552 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:49 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:49 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:49 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.250610 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.260443 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.273718 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-catalog-content\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.273798 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-utilities\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.273838 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9t4d\" (UniqueName: \"kubernetes.io/projected/ed371fd1-a120-4a2d-8868-03e1987260ef-kube-api-access-f9t4d\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.376981 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-utilities\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.377063 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9t4d\" (UniqueName: \"kubernetes.io/projected/ed371fd1-a120-4a2d-8868-03e1987260ef-kube-api-access-f9t4d\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.377114 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-catalog-content\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.377586 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-catalog-content\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.377874 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-utilities\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.417762 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9t4d\" (UniqueName: \"kubernetes.io/projected/ed371fd1-a120-4a2d-8868-03e1987260ef-kube-api-access-f9t4d\") pod \"redhat-marketplace-4fc6h\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.516285 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.559076 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6tl2h"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.570231 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.600887 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tl2h"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.683726 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-utilities\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.684268 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-catalog-content\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.684313 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdz2r\" (UniqueName: \"kubernetes.io/projected/c4fcb369-0439-4325-8e71-aabe07feff87-kube-api-access-qdz2r\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.786401 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-utilities\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.786451 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-catalog-content\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.786487 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdz2r\" (UniqueName: \"kubernetes.io/projected/c4fcb369-0439-4325-8e71-aabe07feff87-kube-api-access-qdz2r\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.787605 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-utilities\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.787893 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-catalog-content\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.792912 5028 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09347c11-c01c-4e98-9e13-9fdf9ed45044","Type":"ContainerStarted","Data":"e6720ee6794052d3733f9405d1fd7b9bb74eb553a12f7638e1996a3ed55b61fe"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.800360 5028 generic.go:334] "Generic (PLEG): container finished" podID="266642af-8ffc-454c-b11e-d81483412956" containerID="cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4" exitCode=0 Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.800450 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42bg" event={"ID":"266642af-8ffc-454c-b11e-d81483412956","Type":"ContainerDied","Data":"cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.824985 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdz2r\" (UniqueName: \"kubernetes.io/projected/c4fcb369-0439-4325-8e71-aabe07feff87-kube-api-access-qdz2r\") pod \"redhat-marketplace-6tl2h\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.827297 5028 generic.go:334] "Generic (PLEG): container finished" podID="8497e823-0924-49d2-a452-d0cb03f2926a" containerID="468b278623ec277a109e05e1d11c4a211487e59e8206e60f552e9a4cc149c2ea" exitCode=0 Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.828118 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzngh" event={"ID":"8497e823-0924-49d2-a452-d0cb03f2926a","Type":"ContainerDied","Data":"468b278623ec277a109e05e1d11c4a211487e59e8206e60f552e9a4cc149c2ea"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.846211 5028 generic.go:334] "Generic (PLEG): container finished" podID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerID="af7576ba595f439bd67d827547d9f90c2b9a5cca8b8f98ab0eb4a2aa9610e036" exitCode=0 Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.846306 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knhhl" event={"ID":"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd","Type":"ContainerDied","Data":"af7576ba595f439bd67d827547d9f90c2b9a5cca8b8f98ab0eb4a2aa9610e036"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.846344 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knhhl" event={"ID":"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd","Type":"ContainerStarted","Data":"7eba66e5eba7755acc69e4330e49e499a71023b24d4a0e823098836058713698"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.858601 5028 generic.go:334] "Generic (PLEG): container finished" podID="16fab255-3f9b-4e01-af32-4fa824a86807" containerID="5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb" exitCode=0 Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.858690 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ljts" event={"ID":"16fab255-3f9b-4e01-af32-4fa824a86807","Type":"ContainerDied","Data":"5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.861584 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"9bde7d8f-55c3-419c-8d1a-e4efb25d5640","Type":"ContainerStarted","Data":"9e3753aa320665304af23b79011acb2d330d3a549ee8325c89d1d9f568d404a8"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.861627 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bde7d8f-55c3-419c-8d1a-e4efb25d5640","Type":"ContainerStarted","Data":"14d18adc3410c1da1cd44688f8d9dfa59552706112a837e254571647258b9c6a"} Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.872206 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mmks"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.877764 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-z6mq2" Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.898537 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fc6h"] Nov 23 06:52:49 crc kubenswrapper[5028]: I1123 06:52:49.934463 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.160837 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.160810574 podStartE2EDuration="3.160810574s" podCreationTimestamp="2025-11-23 06:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:49.934113674 +0000 UTC m=+153.631518453" watchObservedRunningTime="2025-11-23 06:52:50.160810574 +0000 UTC m=+153.858215353" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.162483 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cp5nv"] Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.164863 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.171539 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.175011 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cp5nv"] Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.200978 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-catalog-content\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.201698 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btn5f\" (UniqueName: \"kubernetes.io/projected/8bcb40b3-8fab-431d-a9a7-f85a23090456-kube-api-access-btn5f\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.201788 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-utilities\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.251426 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:50 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:50 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:50 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.251486 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.302770 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-catalog-content\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.302841 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btn5f\" (UniqueName: \"kubernetes.io/projected/8bcb40b3-8fab-431d-a9a7-f85a23090456-kube-api-access-btn5f\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.302876 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-utilities\") pod 
\"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.303337 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-utilities\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.303551 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-catalog-content\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.346243 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btn5f\" (UniqueName: \"kubernetes.io/projected/8bcb40b3-8fab-431d-a9a7-f85a23090456-kube-api-access-btn5f\") pod \"redhat-operators-cp5nv\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.488737 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tl2h"] Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.511769 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.553104 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g2z7w"] Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.554419 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.573438 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2z7w"] Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.685563 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rpcj5" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.710291 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-catalog-content\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.710367 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-utilities\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.710422 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm6f2\" (UniqueName: \"kubernetes.io/projected/c8c63702-05c7-4e95-b682-cd9592ad6caa-kube-api-access-wm6f2\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.817069 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-utilities\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.817179 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm6f2\" (UniqueName: \"kubernetes.io/projected/c8c63702-05c7-4e95-b682-cd9592ad6caa-kube-api-access-wm6f2\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.817282 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-catalog-content\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.817939 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-catalog-content\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.819343 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-utilities\") pod \"redhat-operators-g2z7w\" (UID: 
\"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.840481 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cp5nv"] Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.844026 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm6f2\" (UniqueName: \"kubernetes.io/projected/c8c63702-05c7-4e95-b682-cd9592ad6caa-kube-api-access-wm6f2\") pod \"redhat-operators-g2z7w\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.887363 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" event={"ID":"3ad0fd40-348f-46f6-87f8-001fc9918495","Type":"ContainerStarted","Data":"cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097"} Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.887412 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" event={"ID":"3ad0fd40-348f-46f6-87f8-001fc9918495","Type":"ContainerStarted","Data":"aca0d3a0d4b094fde3d74d330194d27787bef15fb6fde59bc64b59229e61447a"} Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.889619 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.902756 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tl2h" event={"ID":"c4fcb369-0439-4325-8e71-aabe07feff87","Type":"ContainerStarted","Data":"80ac5b3d8871ed1b7012566ce74bb3154f03d2ad11414d92c1b52145ed0374fe"} Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.904935 5028 generic.go:334] "Generic (PLEG): container finished" podID="9bde7d8f-55c3-419c-8d1a-e4efb25d5640" containerID="9e3753aa320665304af23b79011acb2d330d3a549ee8325c89d1d9f568d404a8" exitCode=0 Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.905351 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bde7d8f-55c3-419c-8d1a-e4efb25d5640","Type":"ContainerDied","Data":"9e3753aa320665304af23b79011acb2d330d3a549ee8325c89d1d9f568d404a8"} Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.919433 5028 generic.go:334] "Generic (PLEG): container finished" podID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerID="7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b" exitCode=0 Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.919537 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fc6h" event={"ID":"ed371fd1-a120-4a2d-8868-03e1987260ef","Type":"ContainerDied","Data":"7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b"} Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.919567 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fc6h" event={"ID":"ed371fd1-a120-4a2d-8868-03e1987260ef","Type":"ContainerStarted","Data":"d624348e93d0f4a121b2ff80fc541e4e52effa3158bfce982baf4b90e2e88c6f"} Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.926443 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" 
podStartSLOduration=128.926425634 podStartE2EDuration="2m8.926425634s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:50.926332851 +0000 UTC m=+154.623737630" watchObservedRunningTime="2025-11-23 06:52:50.926425634 +0000 UTC m=+154.623830413" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.945110 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09347c11-c01c-4e98-9e13-9fdf9ed45044","Type":"ContainerStarted","Data":"fc9f08609ce1c81c56b5302a69f34929bb2f0e3d5315eb58de1bd7f345bcf6bc"} Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.996572 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.996549593 podStartE2EDuration="2.996549593s" podCreationTimestamp="2025-11-23 06:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:52:50.990475673 +0000 UTC m=+154.687880442" watchObservedRunningTime="2025-11-23 06:52:50.996549593 +0000 UTC m=+154.693954372" Nov 23 06:52:50 crc kubenswrapper[5028]: I1123 06:52:50.999238 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.253408 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:51 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:51 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:51 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.254028 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.474488 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2z7w"] Nov 23 06:52:51 crc kubenswrapper[5028]: W1123 06:52:51.508909 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8c63702_05c7_4e95_b682_cd9592ad6caa.slice/crio-ce6dedc6e922667550e31dfec6aace28a7ad31a3b3336a1569f63b669a840231 WatchSource:0}: Error finding container ce6dedc6e922667550e31dfec6aace28a7ad31a3b3336a1569f63b669a840231: Status 404 returned error can't find the container with id ce6dedc6e922667550e31dfec6aace28a7ad31a3b3336a1569f63b669a840231 Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.964937 5028 generic.go:334] "Generic (PLEG): container finished" podID="09347c11-c01c-4e98-9e13-9fdf9ed45044" containerID="fc9f08609ce1c81c56b5302a69f34929bb2f0e3d5315eb58de1bd7f345bcf6bc" exitCode=0 Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.965410 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"09347c11-c01c-4e98-9e13-9fdf9ed45044","Type":"ContainerDied","Data":"fc9f08609ce1c81c56b5302a69f34929bb2f0e3d5315eb58de1bd7f345bcf6bc"} Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.985303 5028 generic.go:334] "Generic (PLEG): container finished" podID="c4fcb369-0439-4325-8e71-aabe07feff87" containerID="47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe" exitCode=0 Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.985466 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tl2h" event={"ID":"c4fcb369-0439-4325-8e71-aabe07feff87","Type":"ContainerDied","Data":"47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe"} Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.992036 5028 generic.go:334] "Generic (PLEG): container finished" podID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerID="e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a" exitCode=0 Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.992105 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2z7w" event={"ID":"c8c63702-05c7-4e95-b682-cd9592ad6caa","Type":"ContainerDied","Data":"e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a"} Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.992156 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2z7w" event={"ID":"c8c63702-05c7-4e95-b682-cd9592ad6caa","Type":"ContainerStarted","Data":"ce6dedc6e922667550e31dfec6aace28a7ad31a3b3336a1569f63b669a840231"} Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.996748 5028 generic.go:334] "Generic (PLEG): container finished" podID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerID="2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462" exitCode=0 Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.997347 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5nv" event={"ID":"8bcb40b3-8fab-431d-a9a7-f85a23090456","Type":"ContainerDied","Data":"2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462"} Nov 23 06:52:51 crc kubenswrapper[5028]: I1123 06:52:51.997388 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5nv" event={"ID":"8bcb40b3-8fab-431d-a9a7-f85a23090456","Type":"ContainerStarted","Data":"6bc309c2f6e923623bf929d0147c0e560c68728a6b55870578c14584b5cf45e3"} Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.248308 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:52 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:52 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:52 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.248723 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.369280 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.460502 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kubelet-dir\") pod \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.460600 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kube-api-access\") pod \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\" (UID: \"9bde7d8f-55c3-419c-8d1a-e4efb25d5640\") " Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.461221 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9bde7d8f-55c3-419c-8d1a-e4efb25d5640" (UID: "9bde7d8f-55c3-419c-8d1a-e4efb25d5640"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.466574 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9bde7d8f-55c3-419c-8d1a-e4efb25d5640" (UID: "9bde7d8f-55c3-419c-8d1a-e4efb25d5640"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.562393 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:52 crc kubenswrapper[5028]: I1123 06:52:52.562430 5028 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bde7d8f-55c3-419c-8d1a-e4efb25d5640-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.049160 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.049747 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bde7d8f-55c3-419c-8d1a-e4efb25d5640","Type":"ContainerDied","Data":"14d18adc3410c1da1cd44688f8d9dfa59552706112a837e254571647258b9c6a"} Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.049789 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14d18adc3410c1da1cd44688f8d9dfa59552706112a837e254571647258b9c6a" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.247650 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:53 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:53 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:53 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.248776 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.280390 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.372459 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09347c11-c01c-4e98-9e13-9fdf9ed45044-kubelet-dir\") pod \"09347c11-c01c-4e98-9e13-9fdf9ed45044\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.373245 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09347c11-c01c-4e98-9e13-9fdf9ed45044-kube-api-access\") pod \"09347c11-c01c-4e98-9e13-9fdf9ed45044\" (UID: \"09347c11-c01c-4e98-9e13-9fdf9ed45044\") " Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.372908 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09347c11-c01c-4e98-9e13-9fdf9ed45044-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "09347c11-c01c-4e98-9e13-9fdf9ed45044" (UID: "09347c11-c01c-4e98-9e13-9fdf9ed45044"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.378424 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09347c11-c01c-4e98-9e13-9fdf9ed45044-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "09347c11-c01c-4e98-9e13-9fdf9ed45044" (UID: "09347c11-c01c-4e98-9e13-9fdf9ed45044"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.430581 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.436428 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7rpm6" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.478862 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09347c11-c01c-4e98-9e13-9fdf9ed45044-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.478896 5028 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09347c11-c01c-4e98-9e13-9fdf9ed45044-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:52:53 crc kubenswrapper[5028]: I1123 06:52:53.797896 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nv45l" Nov 23 06:52:54 crc kubenswrapper[5028]: I1123 06:52:54.079014 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09347c11-c01c-4e98-9e13-9fdf9ed45044","Type":"ContainerDied","Data":"e6720ee6794052d3733f9405d1fd7b9bb74eb553a12f7638e1996a3ed55b61fe"} Nov 23 06:52:54 crc kubenswrapper[5028]: I1123 06:52:54.079078 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6720ee6794052d3733f9405d1fd7b9bb74eb553a12f7638e1996a3ed55b61fe" Nov 23 06:52:54 crc kubenswrapper[5028]: I1123 06:52:54.079107 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 23 06:52:54 crc kubenswrapper[5028]: I1123 06:52:54.094020 5028 generic.go:334] "Generic (PLEG): container finished" podID="fab3473d-0543-4160-8ad4-f262ec89e82b" containerID="26b150f57bfa4845a597b6145a89ff3ae12a7c21fc452e0f64daba97c9f81293" exitCode=0 Nov 23 06:52:54 crc kubenswrapper[5028]: I1123 06:52:54.094073 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" event={"ID":"fab3473d-0543-4160-8ad4-f262ec89e82b","Type":"ContainerDied","Data":"26b150f57bfa4845a597b6145a89ff3ae12a7c21fc452e0f64daba97c9f81293"} Nov 23 06:52:54 crc kubenswrapper[5028]: I1123 06:52:54.247397 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:54 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:54 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:54 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:54 crc kubenswrapper[5028]: I1123 06:52:54.247456 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:55 crc kubenswrapper[5028]: I1123 06:52:55.248009 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:55 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:55 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:55 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:55 crc kubenswrapper[5028]: I1123 06:52:55.248397 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:56 crc kubenswrapper[5028]: I1123 06:52:56.246360 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:56 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:56 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:56 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:56 crc kubenswrapper[5028]: I1123 06:52:56.247070 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:57 crc kubenswrapper[5028]: I1123 06:52:57.250235 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:57 crc kubenswrapper[5028]: [-]has-synced failed: reason 
withheld Nov 23 06:52:57 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:57 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:57 crc kubenswrapper[5028]: I1123 06:52:57.250294 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:57 crc kubenswrapper[5028]: I1123 06:52:57.458031 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.203172 5028 patch_prober.go:28] interesting pod/downloads-7954f5f757-m4qn7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.203375 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m4qn7" podUID="e3c4cf13-f6af-4121-9feb-653a6abd396a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.203687 5028 patch_prober.go:28] interesting pod/downloads-7954f5f757-m4qn7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.203754 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m4qn7" podUID="e3c4cf13-f6af-4121-9feb-653a6abd396a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.246970 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:58 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:58 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:58 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.247073 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.434866 5028 patch_prober.go:28] interesting pod/console-f9d7485db-pppdd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Nov 23 06:52:58 crc kubenswrapper[5028]: I1123 06:52:58.434996 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pppdd" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Nov 23 
06:52:59 crc kubenswrapper[5028]: I1123 06:52:59.247623 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:52:59 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:52:59 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:52:59 crc kubenswrapper[5028]: healthz check failed Nov 23 06:52:59 crc kubenswrapper[5028]: I1123 06:52:59.247681 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:00 crc kubenswrapper[5028]: I1123 06:53:00.247263 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:53:00 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:53:00 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:53:00 crc kubenswrapper[5028]: healthz check failed Nov 23 06:53:00 crc kubenswrapper[5028]: I1123 06:53:00.247606 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:00 crc kubenswrapper[5028]: I1123 06:53:00.945990 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:53:00 crc kubenswrapper[5028]: I1123 06:53:00.946038 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.246687 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:53:01 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:53:01 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:53:01 crc kubenswrapper[5028]: healthz check failed Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.246744 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.408873 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.509864 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz86z\" (UniqueName: \"kubernetes.io/projected/fab3473d-0543-4160-8ad4-f262ec89e82b-kube-api-access-qz86z\") pod \"fab3473d-0543-4160-8ad4-f262ec89e82b\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.510060 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab3473d-0543-4160-8ad4-f262ec89e82b-config-volume\") pod \"fab3473d-0543-4160-8ad4-f262ec89e82b\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.510890 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab3473d-0543-4160-8ad4-f262ec89e82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "fab3473d-0543-4160-8ad4-f262ec89e82b" (UID: "fab3473d-0543-4160-8ad4-f262ec89e82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.511059 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab3473d-0543-4160-8ad4-f262ec89e82b-secret-volume\") pod \"fab3473d-0543-4160-8ad4-f262ec89e82b\" (UID: \"fab3473d-0543-4160-8ad4-f262ec89e82b\") " Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.511756 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fab3473d-0543-4160-8ad4-f262ec89e82b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.516478 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab3473d-0543-4160-8ad4-f262ec89e82b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fab3473d-0543-4160-8ad4-f262ec89e82b" (UID: "fab3473d-0543-4160-8ad4-f262ec89e82b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.516789 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab3473d-0543-4160-8ad4-f262ec89e82b-kube-api-access-qz86z" (OuterVolumeSpecName: "kube-api-access-qz86z") pod "fab3473d-0543-4160-8ad4-f262ec89e82b" (UID: "fab3473d-0543-4160-8ad4-f262ec89e82b"). InnerVolumeSpecName "kube-api-access-qz86z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.613272 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fab3473d-0543-4160-8ad4-f262ec89e82b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:01 crc kubenswrapper[5028]: I1123 06:53:01.613307 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz86z\" (UniqueName: \"kubernetes.io/projected/fab3473d-0543-4160-8ad4-f262ec89e82b-kube-api-access-qz86z\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:02 crc kubenswrapper[5028]: I1123 06:53:02.190133 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" event={"ID":"fab3473d-0543-4160-8ad4-f262ec89e82b","Type":"ContainerDied","Data":"23765d2c78605914154aecc0e02a35ba8844b8afdefaee9d229655383f11bf80"} Nov 23 06:53:02 crc kubenswrapper[5028]: I1123 06:53:02.190448 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23765d2c78605914154aecc0e02a35ba8844b8afdefaee9d229655383f11bf80" Nov 23 06:53:02 crc kubenswrapper[5028]: I1123 06:53:02.190167 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh" Nov 23 06:53:02 crc kubenswrapper[5028]: I1123 06:53:02.247564 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:53:02 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:53:02 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:53:02 crc kubenswrapper[5028]: healthz check failed Nov 23 06:53:02 crc kubenswrapper[5028]: I1123 06:53:02.247640 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:03 crc kubenswrapper[5028]: I1123 06:53:03.246623 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:53:03 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:53:03 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:53:03 crc kubenswrapper[5028]: healthz check failed Nov 23 06:53:03 crc kubenswrapper[5028]: I1123 06:53:03.246698 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:04 crc kubenswrapper[5028]: I1123 06:53:04.246139 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:53:04 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:53:04 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:53:04 crc kubenswrapper[5028]: healthz check 
failed Nov 23 06:53:04 crc kubenswrapper[5028]: I1123 06:53:04.246206 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:04 crc kubenswrapper[5028]: I1123 06:53:04.549940 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:53:04 crc kubenswrapper[5028]: I1123 06:53:04.554385 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bfed01d0-dd8f-478d-991f-4a9242b1c2be-metrics-certs\") pod \"network-metrics-daemon-5ft9z\" (UID: \"bfed01d0-dd8f-478d-991f-4a9242b1c2be\") " pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:53:04 crc kubenswrapper[5028]: I1123 06:53:04.574381 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5ft9z" Nov 23 06:53:05 crc kubenswrapper[5028]: I1123 06:53:05.247050 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:53:05 crc kubenswrapper[5028]: [-]has-synced failed: reason withheld Nov 23 06:53:05 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:53:05 crc kubenswrapper[5028]: healthz check failed Nov 23 06:53:05 crc kubenswrapper[5028]: I1123 06:53:05.247131 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:06 crc kubenswrapper[5028]: I1123 06:53:06.246552 5028 patch_prober.go:28] interesting pod/router-default-5444994796-vm29q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 23 06:53:06 crc kubenswrapper[5028]: [+]has-synced ok Nov 23 06:53:06 crc kubenswrapper[5028]: [+]process-running ok Nov 23 06:53:06 crc kubenswrapper[5028]: healthz check failed Nov 23 06:53:06 crc kubenswrapper[5028]: I1123 06:53:06.246909 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vm29q" podUID="3da08adf-859a-4df3-84d6-842f8652b8c5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 23 06:53:07 crc kubenswrapper[5028]: I1123 06:53:07.248393 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:53:07 crc kubenswrapper[5028]: I1123 06:53:07.251810 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vm29q" Nov 23 06:53:08 crc kubenswrapper[5028]: I1123 06:53:08.222858 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-m4qn7" Nov 23 06:53:08 crc kubenswrapper[5028]: I1123 
06:53:08.435247 5028 patch_prober.go:28] interesting pod/console-f9d7485db-pppdd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Nov 23 06:53:08 crc kubenswrapper[5028]: I1123 06:53:08.435300 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-pppdd" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Nov 23 06:53:09 crc kubenswrapper[5028]: I1123 06:53:09.129665 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:53:18 crc kubenswrapper[5028]: I1123 06:53:18.438828 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:53:18 crc kubenswrapper[5028]: I1123 06:53:18.444134 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-pppdd" Nov 23 06:53:18 crc kubenswrapper[5028]: I1123 06:53:18.746903 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bt4rc" Nov 23 06:53:20 crc kubenswrapper[5028]: E1123 06:53:20.877374 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 23 06:53:20 crc kubenswrapper[5028]: E1123 06:53:20.877520 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btn5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cp5nv_openshift-marketplace(8bcb40b3-8fab-431d-a9a7-f85a23090456): ErrImagePull: rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:53:20 crc kubenswrapper[5028]: E1123 06:53:20.878774 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cp5nv" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" Nov 23 06:53:25 crc kubenswrapper[5028]: I1123 06:53:25.792224 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 23 06:53:26 crc kubenswrapper[5028]: E1123 06:53:26.340320 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 06:53:26 crc kubenswrapper[5028]: E1123 06:53:26.340450 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl49r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mzngh_openshift-marketplace(8497e823-0924-49d2-a452-d0cb03f2926a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:53:26 crc kubenswrapper[5028]: E1123 06:53:26.341659 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mzngh" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" Nov 23 06:53:30 crc kubenswrapper[5028]: E1123 06:53:30.795975 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cp5nv" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" Nov 23 06:53:30 crc kubenswrapper[5028]: E1123 06:53:30.796007 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mzngh" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" Nov 23 06:53:30 crc kubenswrapper[5028]: I1123 06:53:30.946715 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:53:30 crc kubenswrapper[5028]: I1123 06:53:30.946793 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:53:31 crc kubenswrapper[5028]: E1123 06:53:31.744389 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 23 06:53:31 crc kubenswrapper[5028]: E1123 06:53:31.744539 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pz62m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m42bg_openshift-marketplace(266642af-8ffc-454c-b11e-d81483412956): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:53:31 crc 
kubenswrapper[5028]: E1123 06:53:31.745744 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m42bg" podUID="266642af-8ffc-454c-b11e-d81483412956" Nov 23 06:53:40 crc kubenswrapper[5028]: E1123 06:53:40.929511 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m42bg" podUID="266642af-8ffc-454c-b11e-d81483412956" Nov 23 06:53:42 crc kubenswrapper[5028]: E1123 06:53:42.292049 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 23 06:53:42 crc kubenswrapper[5028]: E1123 06:53:42.292225 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9t4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4fc6h_openshift-marketplace(ed371fd1-a120-4a2d-8868-03e1987260ef): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:53:42 crc kubenswrapper[5028]: E1123 06:53:42.293455 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4fc6h" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" Nov 23 06:53:42 crc kubenswrapper[5028]: E1123 06:53:42.414152 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4fc6h" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" Nov 23 06:53:42 crc kubenswrapper[5028]: E1123 06:53:42.427917 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 23 06:53:42 crc kubenswrapper[5028]: E1123 06:53:42.428068 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdz2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6tl2h_openshift-marketplace(c4fcb369-0439-4325-8e71-aabe07feff87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 23 06:53:42 crc kubenswrapper[5028]: E1123 06:53:42.429340 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6tl2h" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" Nov 23 06:53:42 crc kubenswrapper[5028]: I1123 06:53:42.824139 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5ft9z"] Nov 23 06:53:42 crc kubenswrapper[5028]: W1123 06:53:42.962589 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfed01d0_dd8f_478d_991f_4a9242b1c2be.slice/crio-35fdd9bbc76bbe5c0a31137c5e3d84e919886e7064338bc9ab14aea900dea2e4 WatchSource:0}: Error finding container 35fdd9bbc76bbe5c0a31137c5e3d84e919886e7064338bc9ab14aea900dea2e4: Status 404 returned error can't find the container with id 
35fdd9bbc76bbe5c0a31137c5e3d84e919886e7064338bc9ab14aea900dea2e4 Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.395158 5028 generic.go:334] "Generic (PLEG): container finished" podID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerID="6737cba6cf8eaa4efc37ab80da8fa623dc90805cd38584fb331b62ce23eab2fa" exitCode=0 Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.395258 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knhhl" event={"ID":"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd","Type":"ContainerDied","Data":"6737cba6cf8eaa4efc37ab80da8fa623dc90805cd38584fb331b62ce23eab2fa"} Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.398930 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" event={"ID":"bfed01d0-dd8f-478d-991f-4a9242b1c2be","Type":"ContainerStarted","Data":"bee5a2424be338f3cb14c66ae54b784c2f409bd799a392c4e9f96987d45160c0"} Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.398977 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" event={"ID":"bfed01d0-dd8f-478d-991f-4a9242b1c2be","Type":"ContainerStarted","Data":"35fdd9bbc76bbe5c0a31137c5e3d84e919886e7064338bc9ab14aea900dea2e4"} Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.401178 5028 generic.go:334] "Generic (PLEG): container finished" podID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerID="80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5" exitCode=0 Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.401231 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2z7w" event={"ID":"c8c63702-05c7-4e95-b682-cd9592ad6caa","Type":"ContainerDied","Data":"80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5"} Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.404892 5028 generic.go:334] "Generic (PLEG): container finished" podID="16fab255-3f9b-4e01-af32-4fa824a86807" containerID="0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277" exitCode=0 Nov 23 06:53:43 crc kubenswrapper[5028]: I1123 06:53:43.405344 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ljts" event={"ID":"16fab255-3f9b-4e01-af32-4fa824a86807","Type":"ContainerDied","Data":"0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277"} Nov 23 06:53:43 crc kubenswrapper[5028]: E1123 06:53:43.406068 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6tl2h" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.412688 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ljts" event={"ID":"16fab255-3f9b-4e01-af32-4fa824a86807","Type":"ContainerStarted","Data":"b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c"} Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.415705 5028 generic.go:334] "Generic (PLEG): container finished" podID="8497e823-0924-49d2-a452-d0cb03f2926a" containerID="ccc48a5bbb47928928b211677d632b3cdfc5e220122906238a5a00de0ec2d437" exitCode=0 Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.415767 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mzngh" event={"ID":"8497e823-0924-49d2-a452-d0cb03f2926a","Type":"ContainerDied","Data":"ccc48a5bbb47928928b211677d632b3cdfc5e220122906238a5a00de0ec2d437"} Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.418785 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knhhl" event={"ID":"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd","Type":"ContainerStarted","Data":"5f5bd799a92f6914196c89faee495b886bfbb0094595d3cb90e7126a3544952e"} Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.421559 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5ft9z" event={"ID":"bfed01d0-dd8f-478d-991f-4a9242b1c2be","Type":"ContainerStarted","Data":"7614567edeb0c4c19c2eb8eabae873f9bd402aae3a5ec69897cf25a345951e12"} Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.423965 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2z7w" event={"ID":"c8c63702-05c7-4e95-b682-cd9592ad6caa","Type":"ContainerStarted","Data":"4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780"} Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.432431 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8ljts" podStartSLOduration=2.356022593 podStartE2EDuration="57.432415997s" podCreationTimestamp="2025-11-23 06:52:47 +0000 UTC" firstStartedPulling="2025-11-23 06:52:48.741830214 +0000 UTC m=+152.439234983" lastFinishedPulling="2025-11-23 06:53:43.818223608 +0000 UTC m=+207.515628387" observedRunningTime="2025-11-23 06:53:44.431901914 +0000 UTC m=+208.129306703" watchObservedRunningTime="2025-11-23 06:53:44.432415997 +0000 UTC m=+208.129820776" Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.453173 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g2z7w" podStartSLOduration=2.564293708 podStartE2EDuration="54.45315582s" podCreationTimestamp="2025-11-23 06:52:50 +0000 UTC" firstStartedPulling="2025-11-23 06:52:51.994326336 +0000 UTC m=+155.691731115" lastFinishedPulling="2025-11-23 06:53:43.883188448 +0000 UTC m=+207.580593227" observedRunningTime="2025-11-23 06:53:44.450791272 +0000 UTC m=+208.148196061" watchObservedRunningTime="2025-11-23 06:53:44.45315582 +0000 UTC m=+208.150560619" Nov 23 06:53:44 crc kubenswrapper[5028]: I1123 06:53:44.486306 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-knhhl" podStartSLOduration=3.350214344 podStartE2EDuration="57.486290362s" podCreationTimestamp="2025-11-23 06:52:47 +0000 UTC" firstStartedPulling="2025-11-23 06:52:49.851593459 +0000 UTC m=+153.548998228" lastFinishedPulling="2025-11-23 06:53:43.987669467 +0000 UTC m=+207.685074246" observedRunningTime="2025-11-23 06:53:44.48542341 +0000 UTC m=+208.182828199" watchObservedRunningTime="2025-11-23 06:53:44.486290362 +0000 UTC m=+208.183695141" Nov 23 06:53:45 crc kubenswrapper[5028]: I1123 06:53:45.437999 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzngh" event={"ID":"8497e823-0924-49d2-a452-d0cb03f2926a","Type":"ContainerStarted","Data":"6a3099b9c736dd51f90de81b04f825aaa4b385b3b97da2dd6012a05adc4d2b77"} Nov 23 06:53:45 crc kubenswrapper[5028]: I1123 06:53:45.455010 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-mzngh" podStartSLOduration=3.445293172 podStartE2EDuration="58.454993283s" podCreationTimestamp="2025-11-23 06:52:47 +0000 UTC" firstStartedPulling="2025-11-23 06:52:49.830797917 +0000 UTC m=+153.528202686" lastFinishedPulling="2025-11-23 06:53:44.840498008 +0000 UTC m=+208.537902797" observedRunningTime="2025-11-23 06:53:45.453959697 +0000 UTC m=+209.151364476" watchObservedRunningTime="2025-11-23 06:53:45.454993283 +0000 UTC m=+209.152398062" Nov 23 06:53:45 crc kubenswrapper[5028]: I1123 06:53:45.455184 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5ft9z" podStartSLOduration=183.455177587 podStartE2EDuration="3m3.455177587s" podCreationTimestamp="2025-11-23 06:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:53:44.504579375 +0000 UTC m=+208.201984154" watchObservedRunningTime="2025-11-23 06:53:45.455177587 +0000 UTC m=+209.152582366" Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.450002 5028 generic.go:334] "Generic (PLEG): container finished" podID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerID="164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb" exitCode=0 Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.450085 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5nv" event={"ID":"8bcb40b3-8fab-431d-a9a7-f85a23090456","Type":"ContainerDied","Data":"164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb"} Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.548252 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.548301 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.712537 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.712587 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.794240 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:53:47 crc kubenswrapper[5028]: I1123 06:53:47.798095 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:53:48 crc kubenswrapper[5028]: I1123 06:53:48.002367 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:53:48 crc kubenswrapper[5028]: I1123 06:53:48.002807 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:53:48 crc kubenswrapper[5028]: I1123 06:53:48.056734 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:53:48 crc kubenswrapper[5028]: I1123 06:53:48.458518 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5nv" 
event={"ID":"8bcb40b3-8fab-431d-a9a7-f85a23090456","Type":"ContainerStarted","Data":"63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc"} Nov 23 06:53:48 crc kubenswrapper[5028]: I1123 06:53:48.489076 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cp5nv" podStartSLOduration=2.641302046 podStartE2EDuration="58.489055641s" podCreationTimestamp="2025-11-23 06:52:50 +0000 UTC" firstStartedPulling="2025-11-23 06:52:52.003365539 +0000 UTC m=+155.700770318" lastFinishedPulling="2025-11-23 06:53:47.851119134 +0000 UTC m=+211.548523913" observedRunningTime="2025-11-23 06:53:48.487670986 +0000 UTC m=+212.185075785" watchObservedRunningTime="2025-11-23 06:53:48.489055641 +0000 UTC m=+212.186460670" Nov 23 06:53:48 crc kubenswrapper[5028]: I1123 06:53:48.553903 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:53:50 crc kubenswrapper[5028]: I1123 06:53:50.512961 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:53:50 crc kubenswrapper[5028]: I1123 06:53:50.513260 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:53:51 crc kubenswrapper[5028]: I1123 06:53:51.000486 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:53:51 crc kubenswrapper[5028]: I1123 06:53:51.000739 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:53:51 crc kubenswrapper[5028]: I1123 06:53:51.041992 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:53:51 crc kubenswrapper[5028]: I1123 06:53:51.514876 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:53:51 crc kubenswrapper[5028]: I1123 06:53:51.550565 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cp5nv" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="registry-server" probeResult="failure" output=< Nov 23 06:53:51 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 06:53:51 crc kubenswrapper[5028]: > Nov 23 06:53:53 crc kubenswrapper[5028]: I1123 06:53:53.617267 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2z7w"] Nov 23 06:53:53 crc kubenswrapper[5028]: I1123 06:53:53.617799 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g2z7w" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="registry-server" containerID="cri-o://4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780" gracePeriod=2 Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.018643 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.087348 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-utilities\") pod \"c8c63702-05c7-4e95-b682-cd9592ad6caa\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.087527 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm6f2\" (UniqueName: \"kubernetes.io/projected/c8c63702-05c7-4e95-b682-cd9592ad6caa-kube-api-access-wm6f2\") pod \"c8c63702-05c7-4e95-b682-cd9592ad6caa\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.087627 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-catalog-content\") pod \"c8c63702-05c7-4e95-b682-cd9592ad6caa\" (UID: \"c8c63702-05c7-4e95-b682-cd9592ad6caa\") " Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.088720 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-utilities" (OuterVolumeSpecName: "utilities") pod "c8c63702-05c7-4e95-b682-cd9592ad6caa" (UID: "c8c63702-05c7-4e95-b682-cd9592ad6caa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.093015 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c63702-05c7-4e95-b682-cd9592ad6caa-kube-api-access-wm6f2" (OuterVolumeSpecName: "kube-api-access-wm6f2") pod "c8c63702-05c7-4e95-b682-cd9592ad6caa" (UID: "c8c63702-05c7-4e95-b682-cd9592ad6caa"). InnerVolumeSpecName "kube-api-access-wm6f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.189326 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.189362 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm6f2\" (UniqueName: \"kubernetes.io/projected/c8c63702-05c7-4e95-b682-cd9592ad6caa-kube-api-access-wm6f2\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.207385 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8c63702-05c7-4e95-b682-cd9592ad6caa" (UID: "c8c63702-05c7-4e95-b682-cd9592ad6caa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.290697 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8c63702-05c7-4e95-b682-cd9592ad6caa-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.501056 5028 generic.go:334] "Generic (PLEG): container finished" podID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerID="4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780" exitCode=0 Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.501109 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2z7w" event={"ID":"c8c63702-05c7-4e95-b682-cd9592ad6caa","Type":"ContainerDied","Data":"4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780"} Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.501137 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2z7w" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.501168 5028 scope.go:117] "RemoveContainer" containerID="4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.501151 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2z7w" event={"ID":"c8c63702-05c7-4e95-b682-cd9592ad6caa","Type":"ContainerDied","Data":"ce6dedc6e922667550e31dfec6aace28a7ad31a3b3336a1569f63b669a840231"} Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.505414 5028 generic.go:334] "Generic (PLEG): container finished" podID="266642af-8ffc-454c-b11e-d81483412956" containerID="9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0" exitCode=0 Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.505450 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42bg" event={"ID":"266642af-8ffc-454c-b11e-d81483412956","Type":"ContainerDied","Data":"9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0"} Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.517499 5028 scope.go:117] "RemoveContainer" containerID="80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.546081 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2z7w"] Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.547467 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g2z7w"] Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.551308 5028 scope.go:117] "RemoveContainer" containerID="e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.565430 5028 scope.go:117] "RemoveContainer" containerID="4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780" Nov 23 06:53:54 crc kubenswrapper[5028]: E1123 06:53:54.565789 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780\": container with ID starting with 4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780 not found: ID does not exist" containerID="4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780" Nov 23 06:53:54 crc kubenswrapper[5028]: 
I1123 06:53:54.565826 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780"} err="failed to get container status \"4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780\": rpc error: code = NotFound desc = could not find container \"4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780\": container with ID starting with 4d11d21889773c0419fdb5e41d529a3c5d9827edf52935325d936ab83439e780 not found: ID does not exist" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.565876 5028 scope.go:117] "RemoveContainer" containerID="80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5" Nov 23 06:53:54 crc kubenswrapper[5028]: E1123 06:53:54.566256 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5\": container with ID starting with 80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5 not found: ID does not exist" containerID="80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.566279 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5"} err="failed to get container status \"80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5\": rpc error: code = NotFound desc = could not find container \"80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5\": container with ID starting with 80f77a14b7840e42fd072d52215bb1b3c1fc5e6948307194dbfc8278087967b5 not found: ID does not exist" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.566294 5028 scope.go:117] "RemoveContainer" containerID="e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a" Nov 23 06:53:54 crc kubenswrapper[5028]: E1123 06:53:54.566517 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a\": container with ID starting with e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a not found: ID does not exist" containerID="e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a" Nov 23 06:53:54 crc kubenswrapper[5028]: I1123 06:53:54.566543 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a"} err="failed to get container status \"e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a\": rpc error: code = NotFound desc = could not find container \"e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a\": container with ID starting with e8b0942ad4f5fd5a40b71d52771ebda2defcd381050bcd4e7f778caf338e186a not found: ID does not exist" Nov 23 06:53:55 crc kubenswrapper[5028]: I1123 06:53:55.067599 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" path="/var/lib/kubelet/pods/c8c63702-05c7-4e95-b682-cd9592ad6caa/volumes" Nov 23 06:53:57 crc kubenswrapper[5028]: I1123 06:53:57.160794 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wfgw7"] Nov 23 06:53:57 crc kubenswrapper[5028]: I1123 06:53:57.762458 5028 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:53:58 crc kubenswrapper[5028]: I1123 06:53:58.037501 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:54:00 crc kubenswrapper[5028]: I1123 06:54:00.555725 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:54:00 crc kubenswrapper[5028]: I1123 06:54:00.595380 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:54:00 crc kubenswrapper[5028]: I1123 06:54:00.946713 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:54:00 crc kubenswrapper[5028]: I1123 06:54:00.946784 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:54:00 crc kubenswrapper[5028]: I1123 06:54:00.946838 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:54:00 crc kubenswrapper[5028]: I1123 06:54:00.947879 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:54:00 crc kubenswrapper[5028]: I1123 06:54:00.947997 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556" gracePeriod=600 Nov 23 06:54:01 crc kubenswrapper[5028]: I1123 06:54:01.816976 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzngh"] Nov 23 06:54:01 crc kubenswrapper[5028]: I1123 06:54:01.817250 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzngh" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="registry-server" containerID="cri-o://6a3099b9c736dd51f90de81b04f825aaa4b385b3b97da2dd6012a05adc4d2b77" gracePeriod=2 Nov 23 06:54:02 crc kubenswrapper[5028]: I1123 06:54:02.018895 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-knhhl"] Nov 23 06:54:02 crc kubenswrapper[5028]: I1123 06:54:02.019168 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-knhhl" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="registry-server" containerID="cri-o://5f5bd799a92f6914196c89faee495b886bfbb0094595d3cb90e7126a3544952e" gracePeriod=2 Nov 23 06:54:02 crc 
kubenswrapper[5028]: I1123 06:54:02.561918 5028 generic.go:334] "Generic (PLEG): container finished" podID="8497e823-0924-49d2-a452-d0cb03f2926a" containerID="6a3099b9c736dd51f90de81b04f825aaa4b385b3b97da2dd6012a05adc4d2b77" exitCode=0 Nov 23 06:54:02 crc kubenswrapper[5028]: I1123 06:54:02.562314 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzngh" event={"ID":"8497e823-0924-49d2-a452-d0cb03f2926a","Type":"ContainerDied","Data":"6a3099b9c736dd51f90de81b04f825aaa4b385b3b97da2dd6012a05adc4d2b77"} Nov 23 06:54:02 crc kubenswrapper[5028]: I1123 06:54:02.564732 5028 generic.go:334] "Generic (PLEG): container finished" podID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerID="5f5bd799a92f6914196c89faee495b886bfbb0094595d3cb90e7126a3544952e" exitCode=0 Nov 23 06:54:02 crc kubenswrapper[5028]: I1123 06:54:02.564794 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knhhl" event={"ID":"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd","Type":"ContainerDied","Data":"5f5bd799a92f6914196c89faee495b886bfbb0094595d3cb90e7126a3544952e"} Nov 23 06:54:02 crc kubenswrapper[5028]: I1123 06:54:02.566459 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556" exitCode=0 Nov 23 06:54:02 crc kubenswrapper[5028]: I1123 06:54:02.566495 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556"} Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.123932 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.165131 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.202048 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtvjv\" (UniqueName: \"kubernetes.io/projected/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-kube-api-access-wtvjv\") pod \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.202109 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl49r\" (UniqueName: \"kubernetes.io/projected/8497e823-0924-49d2-a452-d0cb03f2926a-kube-api-access-cl49r\") pod \"8497e823-0924-49d2-a452-d0cb03f2926a\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.202199 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-utilities\") pod \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.202240 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-catalog-content\") pod \"8497e823-0924-49d2-a452-d0cb03f2926a\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.202274 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-catalog-content\") pod \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\" (UID: \"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd\") " Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.202304 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-utilities\") pod \"8497e823-0924-49d2-a452-d0cb03f2926a\" (UID: \"8497e823-0924-49d2-a452-d0cb03f2926a\") " Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.203053 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-utilities" (OuterVolumeSpecName: "utilities") pod "8497e823-0924-49d2-a452-d0cb03f2926a" (UID: "8497e823-0924-49d2-a452-d0cb03f2926a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.208134 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8497e823-0924-49d2-a452-d0cb03f2926a-kube-api-access-cl49r" (OuterVolumeSpecName: "kube-api-access-cl49r") pod "8497e823-0924-49d2-a452-d0cb03f2926a" (UID: "8497e823-0924-49d2-a452-d0cb03f2926a"). InnerVolumeSpecName "kube-api-access-cl49r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.208241 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-kube-api-access-wtvjv" (OuterVolumeSpecName: "kube-api-access-wtvjv") pod "a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" (UID: "a5abed22-89e8-4b23-b9c0-9d5890d0b8fd"). InnerVolumeSpecName "kube-api-access-wtvjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.220976 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-utilities" (OuterVolumeSpecName: "utilities") pod "a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" (UID: "a5abed22-89e8-4b23-b9c0-9d5890d0b8fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.258666 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8497e823-0924-49d2-a452-d0cb03f2926a" (UID: "8497e823-0924-49d2-a452-d0cb03f2926a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.268549 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" (UID: "a5abed22-89e8-4b23-b9c0-9d5890d0b8fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.303298 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.303334 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.303345 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8497e823-0924-49d2-a452-d0cb03f2926a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.303357 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtvjv\" (UniqueName: \"kubernetes.io/projected/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-kube-api-access-wtvjv\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.303369 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl49r\" (UniqueName: \"kubernetes.io/projected/8497e823-0924-49d2-a452-d0cb03f2926a-kube-api-access-cl49r\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.303377 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.581270 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"b484d7d14b5eba38f48f7afe610bbdd1e7f2ac34f5b939229fb409571cfc4e5a"} Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.582902 5028 generic.go:334] "Generic (PLEG): container finished" podID="ed371fd1-a120-4a2d-8868-03e1987260ef" 
containerID="f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95" exitCode=0 Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.582982 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fc6h" event={"ID":"ed371fd1-a120-4a2d-8868-03e1987260ef","Type":"ContainerDied","Data":"f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95"} Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.587067 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42bg" event={"ID":"266642af-8ffc-454c-b11e-d81483412956","Type":"ContainerStarted","Data":"c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d"} Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.589246 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzngh" event={"ID":"8497e823-0924-49d2-a452-d0cb03f2926a","Type":"ContainerDied","Data":"b8a4a961127bf60313f94751fb340782a73a267b433bea2826f498ae4e9afe04"} Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.589302 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzngh" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.589311 5028 scope.go:117] "RemoveContainer" containerID="6a3099b9c736dd51f90de81b04f825aaa4b385b3b97da2dd6012a05adc4d2b77" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.592029 5028 generic.go:334] "Generic (PLEG): container finished" podID="c4fcb369-0439-4325-8e71-aabe07feff87" containerID="2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de" exitCode=0 Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.592093 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tl2h" event={"ID":"c4fcb369-0439-4325-8e71-aabe07feff87","Type":"ContainerDied","Data":"2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de"} Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.596863 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-knhhl" event={"ID":"a5abed22-89e8-4b23-b9c0-9d5890d0b8fd","Type":"ContainerDied","Data":"7eba66e5eba7755acc69e4330e49e499a71023b24d4a0e823098836058713698"} Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.597199 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-knhhl" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.657591 5028 scope.go:117] "RemoveContainer" containerID="ccc48a5bbb47928928b211677d632b3cdfc5e220122906238a5a00de0ec2d437" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.676846 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m42bg" podStartSLOduration=3.255466989 podStartE2EDuration="1m18.676829205s" podCreationTimestamp="2025-11-23 06:52:46 +0000 UTC" firstStartedPulling="2025-11-23 06:52:48.753233415 +0000 UTC m=+152.450638204" lastFinishedPulling="2025-11-23 06:54:04.174595641 +0000 UTC m=+227.872000420" observedRunningTime="2025-11-23 06:54:04.664068909 +0000 UTC m=+228.361473698" watchObservedRunningTime="2025-11-23 06:54:04.676829205 +0000 UTC m=+228.374233984" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.677932 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzngh"] Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.680833 5028 scope.go:117] "RemoveContainer" containerID="468b278623ec277a109e05e1d11c4a211487e59e8206e60f552e9a4cc149c2ea" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.686883 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzngh"] Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.694863 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-knhhl"] Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.699970 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-knhhl"] Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.700234 5028 scope.go:117] "RemoveContainer" containerID="5f5bd799a92f6914196c89faee495b886bfbb0094595d3cb90e7126a3544952e" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.712345 5028 scope.go:117] "RemoveContainer" containerID="6737cba6cf8eaa4efc37ab80da8fa623dc90805cd38584fb331b62ce23eab2fa" Nov 23 06:54:04 crc kubenswrapper[5028]: I1123 06:54:04.734516 5028 scope.go:117] "RemoveContainer" containerID="af7576ba595f439bd67d827547d9f90c2b9a5cca8b8f98ab0eb4a2aa9610e036" Nov 23 06:54:05 crc kubenswrapper[5028]: I1123 06:54:05.065242 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" path="/var/lib/kubelet/pods/8497e823-0924-49d2-a452-d0cb03f2926a/volumes" Nov 23 06:54:05 crc kubenswrapper[5028]: I1123 06:54:05.066131 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" path="/var/lib/kubelet/pods/a5abed22-89e8-4b23-b9c0-9d5890d0b8fd/volumes" Nov 23 06:54:05 crc kubenswrapper[5028]: I1123 06:54:05.607927 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tl2h" event={"ID":"c4fcb369-0439-4325-8e71-aabe07feff87","Type":"ContainerStarted","Data":"3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c"} Nov 23 06:54:05 crc kubenswrapper[5028]: I1123 06:54:05.613585 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fc6h" event={"ID":"ed371fd1-a120-4a2d-8868-03e1987260ef","Type":"ContainerStarted","Data":"ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b"} Nov 23 06:54:05 crc kubenswrapper[5028]: I1123 06:54:05.636626 5028 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6tl2h" podStartSLOduration=3.64343665 podStartE2EDuration="1m16.636607196s" podCreationTimestamp="2025-11-23 06:52:49 +0000 UTC" firstStartedPulling="2025-11-23 06:52:51.992613414 +0000 UTC m=+155.690018193" lastFinishedPulling="2025-11-23 06:54:04.98578396 +0000 UTC m=+228.683188739" observedRunningTime="2025-11-23 06:54:05.631876849 +0000 UTC m=+229.329281648" watchObservedRunningTime="2025-11-23 06:54:05.636607196 +0000 UTC m=+229.334011975" Nov 23 06:54:05 crc kubenswrapper[5028]: I1123 06:54:05.661840 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4fc6h" podStartSLOduration=2.286934542 podStartE2EDuration="1m16.661818391s" podCreationTimestamp="2025-11-23 06:52:49 +0000 UTC" firstStartedPulling="2025-11-23 06:52:50.969492776 +0000 UTC m=+154.666897555" lastFinishedPulling="2025-11-23 06:54:05.344376635 +0000 UTC m=+229.041781404" observedRunningTime="2025-11-23 06:54:05.659001191 +0000 UTC m=+229.356405970" watchObservedRunningTime="2025-11-23 06:54:05.661818391 +0000 UTC m=+229.359223170" Nov 23 06:54:07 crc kubenswrapper[5028]: I1123 06:54:07.342008 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:54:07 crc kubenswrapper[5028]: I1123 06:54:07.343182 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:54:07 crc kubenswrapper[5028]: I1123 06:54:07.406224 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:54:09 crc kubenswrapper[5028]: I1123 06:54:09.517415 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:54:09 crc kubenswrapper[5028]: I1123 06:54:09.517726 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:54:09 crc kubenswrapper[5028]: I1123 06:54:09.557494 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:54:09 crc kubenswrapper[5028]: I1123 06:54:09.936519 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:54:09 crc kubenswrapper[5028]: I1123 06:54:09.936580 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:54:09 crc kubenswrapper[5028]: I1123 06:54:09.978482 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:54:10 crc kubenswrapper[5028]: I1123 06:54:10.670978 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:54:11 crc kubenswrapper[5028]: I1123 06:54:11.418207 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tl2h"] Nov 23 06:54:12 crc kubenswrapper[5028]: I1123 06:54:12.649420 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6tl2h" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="registry-server" 
containerID="cri-o://3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c" gracePeriod=2 Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.079760 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.214981 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdz2r\" (UniqueName: \"kubernetes.io/projected/c4fcb369-0439-4325-8e71-aabe07feff87-kube-api-access-qdz2r\") pod \"c4fcb369-0439-4325-8e71-aabe07feff87\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.215136 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-utilities\") pod \"c4fcb369-0439-4325-8e71-aabe07feff87\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.215227 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-catalog-content\") pod \"c4fcb369-0439-4325-8e71-aabe07feff87\" (UID: \"c4fcb369-0439-4325-8e71-aabe07feff87\") " Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.216111 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-utilities" (OuterVolumeSpecName: "utilities") pod "c4fcb369-0439-4325-8e71-aabe07feff87" (UID: "c4fcb369-0439-4325-8e71-aabe07feff87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.222132 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fcb369-0439-4325-8e71-aabe07feff87-kube-api-access-qdz2r" (OuterVolumeSpecName: "kube-api-access-qdz2r") pod "c4fcb369-0439-4325-8e71-aabe07feff87" (UID: "c4fcb369-0439-4325-8e71-aabe07feff87"). InnerVolumeSpecName "kube-api-access-qdz2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.245504 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4fcb369-0439-4325-8e71-aabe07feff87" (UID: "c4fcb369-0439-4325-8e71-aabe07feff87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.317248 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.317300 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4fcb369-0439-4325-8e71-aabe07feff87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.317324 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdz2r\" (UniqueName: \"kubernetes.io/projected/c4fcb369-0439-4325-8e71-aabe07feff87-kube-api-access-qdz2r\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.659617 5028 generic.go:334] "Generic (PLEG): container finished" podID="c4fcb369-0439-4325-8e71-aabe07feff87" containerID="3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c" exitCode=0 Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.659674 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tl2h" event={"ID":"c4fcb369-0439-4325-8e71-aabe07feff87","Type":"ContainerDied","Data":"3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c"} Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.659689 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6tl2h" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.659714 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6tl2h" event={"ID":"c4fcb369-0439-4325-8e71-aabe07feff87","Type":"ContainerDied","Data":"80ac5b3d8871ed1b7012566ce74bb3154f03d2ad11414d92c1b52145ed0374fe"} Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.659737 5028 scope.go:117] "RemoveContainer" containerID="3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.695585 5028 scope.go:117] "RemoveContainer" containerID="2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.696438 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tl2h"] Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.703900 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6tl2h"] Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.716694 5028 scope.go:117] "RemoveContainer" containerID="47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.738164 5028 scope.go:117] "RemoveContainer" containerID="3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c" Nov 23 06:54:13 crc kubenswrapper[5028]: E1123 06:54:13.738874 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c\": container with ID starting with 3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c not found: ID does not exist" containerID="3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.739028 5028 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c"} err="failed to get container status \"3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c\": rpc error: code = NotFound desc = could not find container \"3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c\": container with ID starting with 3bd7800105c1d0842e73fc09507efdff0818fadf9d59c4f41e2b3b2a256b5b2c not found: ID does not exist" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.739141 5028 scope.go:117] "RemoveContainer" containerID="2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de" Nov 23 06:54:13 crc kubenswrapper[5028]: E1123 06:54:13.739660 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de\": container with ID starting with 2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de not found: ID does not exist" containerID="2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.739750 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de"} err="failed to get container status \"2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de\": rpc error: code = NotFound desc = could not find container \"2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de\": container with ID starting with 2c64bcf74ab5fb299acec2dadeb8ff8937f5abdc6dbbc0d860eedf209dfef7de not found: ID does not exist" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.739816 5028 scope.go:117] "RemoveContainer" containerID="47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe" Nov 23 06:54:13 crc kubenswrapper[5028]: E1123 06:54:13.740325 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe\": container with ID starting with 47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe not found: ID does not exist" containerID="47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe" Nov 23 06:54:13 crc kubenswrapper[5028]: I1123 06:54:13.740393 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe"} err="failed to get container status \"47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe\": rpc error: code = NotFound desc = could not find container \"47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe\": container with ID starting with 47dfbf63d130f380272bab00de797663ba2ca78bdd7725e09bb9096a4d6f7dbe not found: ID does not exist" Nov 23 06:54:15 crc kubenswrapper[5028]: I1123 06:54:15.064672 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" path="/var/lib/kubelet/pods/c4fcb369-0439-4325-8e71-aabe07feff87/volumes" Nov 23 06:54:17 crc kubenswrapper[5028]: I1123 06:54:17.386918 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:54:19 crc kubenswrapper[5028]: I1123 06:54:19.560654 5028 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.197558 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerName="oauth-openshift" containerID="cri-o://6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2" gracePeriod=15 Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.614398 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634386 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-service-ca\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634447 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-policies\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634471 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-idp-0-file-data\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634502 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zcm4\" (UniqueName: \"kubernetes.io/projected/48e2ebf7-77fd-43a3-9e8b-d89458a00707-kube-api-access-7zcm4\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634549 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-error\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634573 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-session\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634604 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-login\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634630 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-serving-cert\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634658 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-ocp-branding-template\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634688 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-dir\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634734 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-trusted-ca-bundle\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634758 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-provider-selection\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634792 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-router-certs\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.634813 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-cliconfig\") pod \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\" (UID: \"48e2ebf7-77fd-43a3-9e8b-d89458a00707\") " Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.636009 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.636384 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.636715 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.638320 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.640891 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.652180 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.655710 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.656246 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.656412 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48e2ebf7-77fd-43a3-9e8b-d89458a00707-kube-api-access-7zcm4" (OuterVolumeSpecName: "kube-api-access-7zcm4") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "kube-api-access-7zcm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.656833 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f94556d49-rm6np"] Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657094 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bde7d8f-55c3-419c-8d1a-e4efb25d5640" containerName="pruner" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657114 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bde7d8f-55c3-419c-8d1a-e4efb25d5640" containerName="pruner" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657128 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657138 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657149 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657184 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657196 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657205 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657217 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657225 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657243 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657251 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657260 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657268 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657281 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657289 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="extract-utilities" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657300 5028 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657360 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657375 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657384 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657397 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09347c11-c01c-4e98-9e13-9fdf9ed45044" containerName="pruner" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657405 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="09347c11-c01c-4e98-9e13-9fdf9ed45044" containerName="pruner" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657444 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab3473d-0543-4160-8ad4-f262ec89e82b" containerName="collect-profiles" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657453 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab3473d-0543-4160-8ad4-f262ec89e82b" containerName="collect-profiles" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657463 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657470 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657486 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657520 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657537 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657545 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="extract-content" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.657556 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerName="oauth-openshift" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657564 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerName="oauth-openshift" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657736 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8497e823-0924-49d2-a452-d0cb03f2926a" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657778 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerName="oauth-openshift" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657791 5028 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c8c63702-05c7-4e95-b682-cd9592ad6caa" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657802 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab3473d-0543-4160-8ad4-f262ec89e82b" containerName="collect-profiles" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657817 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5abed22-89e8-4b23-b9c0-9d5890d0b8fd" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657829 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="09347c11-c01c-4e98-9e13-9fdf9ed45044" containerName="pruner" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657865 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fcb369-0439-4325-8e71-aabe07feff87" containerName="registry-server" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.657879 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bde7d8f-55c3-419c-8d1a-e4efb25d5640" containerName="pruner" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.658560 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.659287 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.659691 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.659886 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.660304 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.665595 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f94556d49-rm6np"] Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.666071 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "48e2ebf7-77fd-43a3-9e8b-d89458a00707" (UID: "48e2ebf7-77fd-43a3-9e8b-d89458a00707"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.708290 5028 generic.go:334] "Generic (PLEG): container finished" podID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" containerID="6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2" exitCode=0 Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.708334 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" event={"ID":"48e2ebf7-77fd-43a3-9e8b-d89458a00707","Type":"ContainerDied","Data":"6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2"} Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.708353 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.708372 5028 scope.go:117] "RemoveContainer" containerID="6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.708362 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wfgw7" event={"ID":"48e2ebf7-77fd-43a3-9e8b-d89458a00707","Type":"ContainerDied","Data":"26d2fdafd5686e68b91647dc2773bd8a5022e0b132f51540333930b7263d5467"} Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.733157 5028 scope.go:117] "RemoveContainer" containerID="6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2" Nov 23 06:54:22 crc kubenswrapper[5028]: E1123 06:54:22.738145 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2\": container with ID starting with 6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2 not found: ID does not exist" containerID="6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.738209 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2"} err="failed to get container status \"6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2\": rpc error: code = NotFound desc = could not find container \"6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2\": container with ID starting with 6934796e013e5262b4fd89628661438d752dffbb3f3791d3a08f7cdb303700d2 not found: ID does not exist" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739157 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-login\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739189 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739224 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-audit-dir\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739267 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739314 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739367 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-audit-policies\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739404 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739429 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739455 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739484 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739512 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-session\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739541 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739564 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-error\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739592 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xknp5\" (UniqueName: \"kubernetes.io/projected/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-kube-api-access-xknp5\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739649 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739661 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739672 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-ocp-branding-template\") on node 
\"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739687 5028 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739698 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739711 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739721 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739735 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739745 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739754 5028 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/48e2ebf7-77fd-43a3-9e8b-d89458a00707-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739767 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739777 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zcm4\" (UniqueName: \"kubernetes.io/projected/48e2ebf7-77fd-43a3-9e8b-d89458a00707-kube-api-access-7zcm4\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739788 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739797 5028 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/48e2ebf7-77fd-43a3-9e8b-d89458a00707-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.739913 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wfgw7"] Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.742248 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-wfgw7"] Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840387 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-login\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840434 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840464 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-audit-dir\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840488 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840516 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840532 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-audit-policies\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840556 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840573 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " 
pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840589 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840608 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840631 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-session\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840647 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840661 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-error\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.840682 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xknp5\" (UniqueName: \"kubernetes.io/projected/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-kube-api-access-xknp5\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.841573 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.841619 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-audit-dir\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc 
kubenswrapper[5028]: I1123 06:54:22.841836 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.842189 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.842385 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-audit-policies\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.844032 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.844593 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.844940 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-session\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.845007 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-error\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.845178 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-login\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.845152 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.845376 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.846177 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.855666 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xknp5\" (UniqueName: \"kubernetes.io/projected/39a691f3-142b-4ab5-bbe5-e9cf62e60d81-kube-api-access-xknp5\") pod \"oauth-openshift-6f94556d49-rm6np\" (UID: \"39a691f3-142b-4ab5-bbe5-e9cf62e60d81\") " pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:22 crc kubenswrapper[5028]: I1123 06:54:22.998883 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:23 crc kubenswrapper[5028]: I1123 06:54:23.061694 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48e2ebf7-77fd-43a3-9e8b-d89458a00707" path="/var/lib/kubelet/pods/48e2ebf7-77fd-43a3-9e8b-d89458a00707/volumes" Nov 23 06:54:23 crc kubenswrapper[5028]: I1123 06:54:23.214066 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f94556d49-rm6np"] Nov 23 06:54:23 crc kubenswrapper[5028]: I1123 06:54:23.714572 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" event={"ID":"39a691f3-142b-4ab5-bbe5-e9cf62e60d81","Type":"ContainerStarted","Data":"3b49fb7c75705e290e56134f4c6f3228042996c592a879f97c00c657c9c750cb"} Nov 23 06:54:23 crc kubenswrapper[5028]: I1123 06:54:23.715084 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" event={"ID":"39a691f3-142b-4ab5-bbe5-e9cf62e60d81","Type":"ContainerStarted","Data":"ca66fe5d3933e067d90f6a8e0c6e213a04e776312b4ac627ffb0e28fabc3e6fe"} Nov 23 06:54:23 crc kubenswrapper[5028]: I1123 06:54:23.715102 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:23 crc kubenswrapper[5028]: I1123 06:54:23.734091 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" podStartSLOduration=26.734066533 podStartE2EDuration="26.734066533s" podCreationTimestamp="2025-11-23 06:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:54:23.730911635 +0000 UTC m=+247.428316414" watchObservedRunningTime="2025-11-23 06:54:23.734066533 +0000 UTC m=+247.431471312" Nov 23 06:54:24 crc kubenswrapper[5028]: I1123 06:54:24.120977 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f94556d49-rm6np" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.208357 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m42bg"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.213201 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ljts"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.213499 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8ljts" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="registry-server" containerID="cri-o://b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c" gracePeriod=30 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.214111 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m42bg" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="registry-server" containerID="cri-o://c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d" gracePeriod=30 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.223636 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzg2c"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.223841 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" podUID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" containerName="marketplace-operator" containerID="cri-o://4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44" gracePeriod=30 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.230548 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fc6h"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.230819 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4fc6h" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="registry-server" containerID="cri-o://ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b" gracePeriod=30 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.248096 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cp5nv"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.248397 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cp5nv" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="registry-server" containerID="cri-o://63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" gracePeriod=30 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.250720 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sphk5"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.251663 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.256891 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sphk5"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.370888 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spdmd\" (UniqueName: \"kubernetes.io/projected/29dda505-6228-430b-8c95-89713ee51f01-kube-api-access-spdmd\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.371007 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29dda505-6228-430b-8c95-89713ee51f01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.371089 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29dda505-6228-430b-8c95-89713ee51f01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.472354 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spdmd\" (UniqueName: \"kubernetes.io/projected/29dda505-6228-430b-8c95-89713ee51f01-kube-api-access-spdmd\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.472409 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29dda505-6228-430b-8c95-89713ee51f01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.472509 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29dda505-6228-430b-8c95-89713ee51f01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.474843 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29dda505-6228-430b-8c95-89713ee51f01-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.480034 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/29dda505-6228-430b-8c95-89713ee51f01-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.489498 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spdmd\" (UniqueName: \"kubernetes.io/projected/29dda505-6228-430b-8c95-89713ee51f01-kube-api-access-spdmd\") pod \"marketplace-operator-79b997595-sphk5\" (UID: \"29dda505-6228-430b-8c95-89713ee51f01\") " pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.515376 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc is running failed: container process not found" containerID="63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.516678 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc is running failed: container process not found" containerID="63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.517006 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc is running failed: container process not found" containerID="63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" cmd=["grpc_health_probe","-addr=:50051"] Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.517053 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-cp5nv" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="registry-server" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.648641 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.659579 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.717718 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.723691 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.745317 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.765430 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776167 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ptjs\" (UniqueName: \"kubernetes.io/projected/16fab255-3f9b-4e01-af32-4fa824a86807-kube-api-access-4ptjs\") pod \"16fab255-3f9b-4e01-af32-4fa824a86807\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776220 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-catalog-content\") pod \"266642af-8ffc-454c-b11e-d81483412956\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776245 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9t4d\" (UniqueName: \"kubernetes.io/projected/ed371fd1-a120-4a2d-8868-03e1987260ef-kube-api-access-f9t4d\") pod \"ed371fd1-a120-4a2d-8868-03e1987260ef\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776271 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-catalog-content\") pod \"16fab255-3f9b-4e01-af32-4fa824a86807\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776290 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-catalog-content\") pod \"8bcb40b3-8fab-431d-a9a7-f85a23090456\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776316 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-utilities\") pod \"266642af-8ffc-454c-b11e-d81483412956\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776336 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-trusted-ca\") pod \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776351 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-utilities\") pod \"16fab255-3f9b-4e01-af32-4fa824a86807\" (UID: \"16fab255-3f9b-4e01-af32-4fa824a86807\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776370 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-operator-metrics\") pod \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " Nov 23 06:54:40 crc 
kubenswrapper[5028]: I1123 06:54:40.776386 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px4hm\" (UniqueName: \"kubernetes.io/projected/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-kube-api-access-px4hm\") pod \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\" (UID: \"012b6f83-ae0e-4a83-b806-6634bb4c1f4a\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776415 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz62m\" (UniqueName: \"kubernetes.io/projected/266642af-8ffc-454c-b11e-d81483412956-kube-api-access-pz62m\") pod \"266642af-8ffc-454c-b11e-d81483412956\" (UID: \"266642af-8ffc-454c-b11e-d81483412956\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776438 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-utilities\") pod \"8bcb40b3-8fab-431d-a9a7-f85a23090456\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776456 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-catalog-content\") pod \"ed371fd1-a120-4a2d-8868-03e1987260ef\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776477 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btn5f\" (UniqueName: \"kubernetes.io/projected/8bcb40b3-8fab-431d-a9a7-f85a23090456-kube-api-access-btn5f\") pod \"8bcb40b3-8fab-431d-a9a7-f85a23090456\" (UID: \"8bcb40b3-8fab-431d-a9a7-f85a23090456\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.776496 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-utilities\") pod \"ed371fd1-a120-4a2d-8868-03e1987260ef\" (UID: \"ed371fd1-a120-4a2d-8868-03e1987260ef\") " Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.780557 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16fab255-3f9b-4e01-af32-4fa824a86807-kube-api-access-4ptjs" (OuterVolumeSpecName: "kube-api-access-4ptjs") pod "16fab255-3f9b-4e01-af32-4fa824a86807" (UID: "16fab255-3f9b-4e01-af32-4fa824a86807"). InnerVolumeSpecName "kube-api-access-4ptjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.784249 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-kube-api-access-px4hm" (OuterVolumeSpecName: "kube-api-access-px4hm") pod "012b6f83-ae0e-4a83-b806-6634bb4c1f4a" (UID: "012b6f83-ae0e-4a83-b806-6634bb4c1f4a"). InnerVolumeSpecName "kube-api-access-px4hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.784399 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bcb40b3-8fab-431d-a9a7-f85a23090456-kube-api-access-btn5f" (OuterVolumeSpecName: "kube-api-access-btn5f") pod "8bcb40b3-8fab-431d-a9a7-f85a23090456" (UID: "8bcb40b3-8fab-431d-a9a7-f85a23090456"). InnerVolumeSpecName "kube-api-access-btn5f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.785243 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-utilities" (OuterVolumeSpecName: "utilities") pod "ed371fd1-a120-4a2d-8868-03e1987260ef" (UID: "ed371fd1-a120-4a2d-8868-03e1987260ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.788155 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-utilities" (OuterVolumeSpecName: "utilities") pod "8bcb40b3-8fab-431d-a9a7-f85a23090456" (UID: "8bcb40b3-8fab-431d-a9a7-f85a23090456"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.790898 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "012b6f83-ae0e-4a83-b806-6634bb4c1f4a" (UID: "012b6f83-ae0e-4a83-b806-6634bb4c1f4a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.791969 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-utilities" (OuterVolumeSpecName: "utilities") pod "266642af-8ffc-454c-b11e-d81483412956" (UID: "266642af-8ffc-454c-b11e-d81483412956"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.794074 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-utilities" (OuterVolumeSpecName: "utilities") pod "16fab255-3f9b-4e01-af32-4fa824a86807" (UID: "16fab255-3f9b-4e01-af32-4fa824a86807"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.797601 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "012b6f83-ae0e-4a83-b806-6634bb4c1f4a" (UID: "012b6f83-ae0e-4a83-b806-6634bb4c1f4a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.799508 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266642af-8ffc-454c-b11e-d81483412956-kube-api-access-pz62m" (OuterVolumeSpecName: "kube-api-access-pz62m") pod "266642af-8ffc-454c-b11e-d81483412956" (UID: "266642af-8ffc-454c-b11e-d81483412956"). InnerVolumeSpecName "kube-api-access-pz62m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.805045 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed371fd1-a120-4a2d-8868-03e1987260ef-kube-api-access-f9t4d" (OuterVolumeSpecName: "kube-api-access-f9t4d") pod "ed371fd1-a120-4a2d-8868-03e1987260ef" (UID: "ed371fd1-a120-4a2d-8868-03e1987260ef"). 
InnerVolumeSpecName "kube-api-access-f9t4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.835397 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed371fd1-a120-4a2d-8868-03e1987260ef" (UID: "ed371fd1-a120-4a2d-8868-03e1987260ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.850397 5028 generic.go:334] "Generic (PLEG): container finished" podID="16fab255-3f9b-4e01-af32-4fa824a86807" containerID="b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c" exitCode=0 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.850689 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ljts" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.851268 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ljts" event={"ID":"16fab255-3f9b-4e01-af32-4fa824a86807","Type":"ContainerDied","Data":"b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.851490 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ljts" event={"ID":"16fab255-3f9b-4e01-af32-4fa824a86807","Type":"ContainerDied","Data":"da32b0a2dbb6cc02a3f06c2efddfd605d8bfe3565f2ad11b4e8cd30349e2bcbe"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.851577 5028 scope.go:117] "RemoveContainer" containerID="b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.856182 5028 generic.go:334] "Generic (PLEG): container finished" podID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerID="ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b" exitCode=0 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.856249 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fc6h" event={"ID":"ed371fd1-a120-4a2d-8868-03e1987260ef","Type":"ContainerDied","Data":"ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.856275 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fc6h" event={"ID":"ed371fd1-a120-4a2d-8868-03e1987260ef","Type":"ContainerDied","Data":"d624348e93d0f4a121b2ff80fc541e4e52effa3158bfce982baf4b90e2e88c6f"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.856332 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fc6h" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.863627 5028 generic.go:334] "Generic (PLEG): container finished" podID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerID="63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" exitCode=0 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.864011 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5nv" event={"ID":"8bcb40b3-8fab-431d-a9a7-f85a23090456","Type":"ContainerDied","Data":"63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.864039 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cp5nv" event={"ID":"8bcb40b3-8fab-431d-a9a7-f85a23090456","Type":"ContainerDied","Data":"6bc309c2f6e923623bf929d0147c0e560c68728a6b55870578c14584b5cf45e3"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.864096 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cp5nv" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.867726 5028 generic.go:334] "Generic (PLEG): container finished" podID="266642af-8ffc-454c-b11e-d81483412956" containerID="c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d" exitCode=0 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.867782 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42bg" event={"ID":"266642af-8ffc-454c-b11e-d81483412956","Type":"ContainerDied","Data":"c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.867803 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m42bg" event={"ID":"266642af-8ffc-454c-b11e-d81483412956","Type":"ContainerDied","Data":"71ff5c287cc3652cc691396e027abaf7e4cd763d36edd6bec6998487ace1e45c"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.867815 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m42bg" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.876400 5028 scope.go:117] "RemoveContainer" containerID="0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.877958 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz62m\" (UniqueName: \"kubernetes.io/projected/266642af-8ffc-454c-b11e-d81483412956-kube-api-access-pz62m\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.877981 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.877991 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878000 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btn5f\" (UniqueName: \"kubernetes.io/projected/8bcb40b3-8fab-431d-a9a7-f85a23090456-kube-api-access-btn5f\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878010 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed371fd1-a120-4a2d-8868-03e1987260ef-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878018 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ptjs\" (UniqueName: \"kubernetes.io/projected/16fab255-3f9b-4e01-af32-4fa824a86807-kube-api-access-4ptjs\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878026 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9t4d\" (UniqueName: \"kubernetes.io/projected/ed371fd1-a120-4a2d-8868-03e1987260ef-kube-api-access-f9t4d\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878035 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878043 5028 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878051 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878060 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px4hm\" (UniqueName: \"kubernetes.io/projected/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-kube-api-access-px4hm\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878069 5028 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012b6f83-ae0e-4a83-b806-6634bb4c1f4a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 23 
06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878821 5028 generic.go:334] "Generic (PLEG): container finished" podID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" containerID="4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44" exitCode=0 Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878902 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" event={"ID":"012b6f83-ae0e-4a83-b806-6634bb4c1f4a","Type":"ContainerDied","Data":"4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.878927 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" event={"ID":"012b6f83-ae0e-4a83-b806-6634bb4c1f4a","Type":"ContainerDied","Data":"86540e52dd48ff3ce80fbf8ddf9821521e3dc03d4483fbd8744db9a93dd54c5b"} Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.879067 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fzg2c" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.887558 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fc6h"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.888651 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16fab255-3f9b-4e01-af32-4fa824a86807" (UID: "16fab255-3f9b-4e01-af32-4fa824a86807"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.892153 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fc6h"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.897637 5028 scope.go:117] "RemoveContainer" containerID="5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.916832 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "266642af-8ffc-454c-b11e-d81483412956" (UID: "266642af-8ffc-454c-b11e-d81483412956"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.916912 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzg2c"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.920004 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fzg2c"] Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.922152 5028 scope.go:117] "RemoveContainer" containerID="b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c" Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.925518 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c\": container with ID starting with b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c not found: ID does not exist" containerID="b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.925566 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c"} err="failed to get container status \"b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c\": rpc error: code = NotFound desc = could not find container \"b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c\": container with ID starting with b7960248047d730161af0f552f325c5959190a918e895543d9c2e93066999b1c not found: ID does not exist" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.925590 5028 scope.go:117] "RemoveContainer" containerID="0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277" Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.926158 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277\": container with ID starting with 0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277 not found: ID does not exist" containerID="0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.926231 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277"} err="failed to get container status \"0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277\": rpc error: code = NotFound desc = could not find container \"0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277\": container with ID starting with 0488e1fa5b8f7e114275db7632cdcf5fafbdb07f464afd6721d4887a96cec277 not found: ID does not exist" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.926449 5028 scope.go:117] "RemoveContainer" containerID="5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb" Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.926720 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb\": container with ID starting with 5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb not found: ID does not exist" containerID="5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb" Nov 23 
06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.926749 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb"} err="failed to get container status \"5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb\": rpc error: code = NotFound desc = could not find container \"5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb\": container with ID starting with 5a89f47a46eb37f6cb3dda9afca493d8dc5d08b2d6de82c2306180be4f2b0ecb not found: ID does not exist" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.926767 5028 scope.go:117] "RemoveContainer" containerID="ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.938755 5028 scope.go:117] "RemoveContainer" containerID="f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.948189 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bcb40b3-8fab-431d-a9a7-f85a23090456" (UID: "8bcb40b3-8fab-431d-a9a7-f85a23090456"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.952497 5028 scope.go:117] "RemoveContainer" containerID="7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.963217 5028 scope.go:117] "RemoveContainer" containerID="ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b" Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.963566 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b\": container with ID starting with ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b not found: ID does not exist" containerID="ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.963601 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b"} err="failed to get container status \"ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b\": rpc error: code = NotFound desc = could not find container \"ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b\": container with ID starting with ed7e3c60b783e266cfef4f1fc7e3e1c135cc4014457d71754ffb83562e18376b not found: ID does not exist" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.963638 5028 scope.go:117] "RemoveContainer" containerID="f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95" Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.963913 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95\": container with ID starting with f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95 not found: ID does not exist" containerID="f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.963967 5028 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95"} err="failed to get container status \"f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95\": rpc error: code = NotFound desc = could not find container \"f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95\": container with ID starting with f13572359202ed879b90c034bb5434604f822ead46732fc7a6c3b6a2766d8f95 not found: ID does not exist" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.963986 5028 scope.go:117] "RemoveContainer" containerID="7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b" Nov 23 06:54:40 crc kubenswrapper[5028]: E1123 06:54:40.964264 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b\": container with ID starting with 7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b not found: ID does not exist" containerID="7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.964288 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b"} err="failed to get container status \"7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b\": rpc error: code = NotFound desc = could not find container \"7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b\": container with ID starting with 7faed36690db903076bd2aa520cb53283a2d378e8a05e7417ca7f5ac8826813b not found: ID does not exist" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.964304 5028 scope.go:117] "RemoveContainer" containerID="63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.978797 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/266642af-8ffc-454c-b11e-d81483412956-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.978833 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16fab255-3f9b-4e01-af32-4fa824a86807-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.978848 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bcb40b3-8fab-431d-a9a7-f85a23090456-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.980006 5028 scope.go:117] "RemoveContainer" containerID="164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb" Nov 23 06:54:40 crc kubenswrapper[5028]: I1123 06:54:40.996714 5028 scope.go:117] "RemoveContainer" containerID="2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.013213 5028 scope.go:117] "RemoveContainer" containerID="63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.013599 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc\": container with ID starting with 
63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc not found: ID does not exist" containerID="63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.013652 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc"} err="failed to get container status \"63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc\": rpc error: code = NotFound desc = could not find container \"63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc\": container with ID starting with 63495cd04bb52f3b0da3d7968cbbb8cffe52d972657f99244fa665ed4e2175cc not found: ID does not exist" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.013680 5028 scope.go:117] "RemoveContainer" containerID="164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.014106 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb\": container with ID starting with 164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb not found: ID does not exist" containerID="164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.014138 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb"} err="failed to get container status \"164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb\": rpc error: code = NotFound desc = could not find container \"164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb\": container with ID starting with 164fdacdeac79a5fbf2fcd53073975cab88a53884cafe3695c1f55afd3cc90cb not found: ID does not exist" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.014159 5028 scope.go:117] "RemoveContainer" containerID="2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.014511 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462\": container with ID starting with 2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462 not found: ID does not exist" containerID="2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.014554 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462"} err="failed to get container status \"2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462\": rpc error: code = NotFound desc = could not find container \"2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462\": container with ID starting with 2cef849898240618bb2a5935f8f4e01900ca188280eab79ba91150252966e462 not found: ID does not exist" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.014674 5028 scope.go:117] "RemoveContainer" containerID="c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.030123 5028 scope.go:117] "RemoveContainer" 
containerID="9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.045280 5028 scope.go:117] "RemoveContainer" containerID="cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.060395 5028 scope.go:117] "RemoveContainer" containerID="c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.069267 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d\": container with ID starting with c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d not found: ID does not exist" containerID="c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.069389 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d"} err="failed to get container status \"c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d\": rpc error: code = NotFound desc = could not find container \"c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d\": container with ID starting with c6123fe3853a1a384b07feb6c19dbf38154bb6edee1763acb1d401fa7377439d not found: ID does not exist" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.069489 5028 scope.go:117] "RemoveContainer" containerID="9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.070912 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0\": container with ID starting with 9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0 not found: ID does not exist" containerID="9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.070971 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0"} err="failed to get container status \"9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0\": rpc error: code = NotFound desc = could not find container \"9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0\": container with ID starting with 9ca3fca3bd8a4fe36f20b7db2f6ac53eb204d3dc63169589987b5022cba3f5c0 not found: ID does not exist" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.070991 5028 scope.go:117] "RemoveContainer" containerID="cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.071652 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4\": container with ID starting with cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4 not found: ID does not exist" containerID="cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.071708 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4"} err="failed to get container status \"cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4\": rpc error: code = NotFound desc = could not find container \"cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4\": container with ID starting with cc0570c35fcd27d1913fdfc5b16adf4fe64ac237cd4bdc920070fb3a9a0532b4 not found: ID does not exist" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.071735 5028 scope.go:117] "RemoveContainer" containerID="4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.076892 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" path="/var/lib/kubelet/pods/012b6f83-ae0e-4a83-b806-6634bb4c1f4a/volumes" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.077422 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" path="/var/lib/kubelet/pods/ed371fd1-a120-4a2d-8868-03e1987260ef/volumes" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.089723 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sphk5"] Nov 23 06:54:41 crc kubenswrapper[5028]: W1123 06:54:41.097594 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29dda505_6228_430b_8c95_89713ee51f01.slice/crio-01fdf31a8a5c64007872a59d5a6da47b50bfaf804e7fed0c6bd72e0ae73e3ffb WatchSource:0}: Error finding container 01fdf31a8a5c64007872a59d5a6da47b50bfaf804e7fed0c6bd72e0ae73e3ffb: Status 404 returned error can't find the container with id 01fdf31a8a5c64007872a59d5a6da47b50bfaf804e7fed0c6bd72e0ae73e3ffb Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.102623 5028 scope.go:117] "RemoveContainer" containerID="4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.103102 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44\": container with ID starting with 4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44 not found: ID does not exist" containerID="4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.103146 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44"} err="failed to get container status \"4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44\": rpc error: code = NotFound desc = could not find container \"4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44\": container with ID starting with 4fb0920f69856f6aea949d31cefedb2a74450e3f3c948b2d238018093b1b0f44 not found: ID does not exist" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.177331 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ljts"] Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.180614 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8ljts"] Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.184875 5028 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-cp5nv"] Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.187564 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cp5nv"] Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.200617 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m42bg"] Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.204261 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m42bg"] Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.678970 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xlftc"] Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679194 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" containerName="marketplace-operator" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679211 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" containerName="marketplace-operator" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679222 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679229 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679242 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679249 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679259 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679266 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679276 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679283 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679291 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679297 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679305 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679310 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="registry-server" 
Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679320 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679325 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679332 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679337 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679348 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679353 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679360 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679366 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679373 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679379 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="extract-utilities" Nov 23 06:54:41 crc kubenswrapper[5028]: E1123 06:54:41.679384 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679390 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="extract-content" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679471 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679483 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed371fd1-a120-4a2d-8868-03e1987260ef" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679506 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="012b6f83-ae0e-4a83-b806-6634bb4c1f4a" containerName="marketplace-operator" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679512 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.679520 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="266642af-8ffc-454c-b11e-d81483412956" containerName="registry-server" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.680205 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.682018 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.688675 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlftc"] Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.698739 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-utilities\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.698807 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llh59\" (UniqueName: \"kubernetes.io/projected/c3b796b1-e1ea-4f63-8639-dd8575ea6985-kube-api-access-llh59\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.698844 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-catalog-content\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.799772 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-catalog-content\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.799904 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-utilities\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.800003 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llh59\" (UniqueName: \"kubernetes.io/projected/c3b796b1-e1ea-4f63-8639-dd8575ea6985-kube-api-access-llh59\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.800269 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-catalog-content\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.800319 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-utilities\") pod \"community-operators-xlftc\" (UID: 
\"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.823377 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llh59\" (UniqueName: \"kubernetes.io/projected/c3b796b1-e1ea-4f63-8639-dd8575ea6985-kube-api-access-llh59\") pod \"community-operators-xlftc\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") " pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.889875 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" event={"ID":"29dda505-6228-430b-8c95-89713ee51f01","Type":"ContainerStarted","Data":"6695dc1a81365ca70967cb1dcb3c819aaeb7d5b347376d7eb5100a903119308a"} Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.889921 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" event={"ID":"29dda505-6228-430b-8c95-89713ee51f01","Type":"ContainerStarted","Data":"01fdf31a8a5c64007872a59d5a6da47b50bfaf804e7fed0c6bd72e0ae73e3ffb"} Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.891123 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.893515 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.908821 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sphk5" podStartSLOduration=1.908799896 podStartE2EDuration="1.908799896s" podCreationTimestamp="2025-11-23 06:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:54:41.906435567 +0000 UTC m=+265.603840346" watchObservedRunningTime="2025-11-23 06:54:41.908799896 +0000 UTC m=+265.606204685" Nov 23 06:54:41 crc kubenswrapper[5028]: I1123 06:54:41.993467 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:42 crc kubenswrapper[5028]: I1123 06:54:42.370690 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlftc"] Nov 23 06:54:42 crc kubenswrapper[5028]: W1123 06:54:42.380889 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b796b1_e1ea_4f63_8639_dd8575ea6985.slice/crio-88b181da018898fc1c56237b6fa92ec2d8392bb54ed0f57675154a4cdba9aff5 WatchSource:0}: Error finding container 88b181da018898fc1c56237b6fa92ec2d8392bb54ed0f57675154a4cdba9aff5: Status 404 returned error can't find the container with id 88b181da018898fc1c56237b6fa92ec2d8392bb54ed0f57675154a4cdba9aff5 Nov 23 06:54:42 crc kubenswrapper[5028]: I1123 06:54:42.900436 5028 generic.go:334] "Generic (PLEG): container finished" podID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerID="2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f" exitCode=0 Nov 23 06:54:42 crc kubenswrapper[5028]: I1123 06:54:42.900602 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlftc" event={"ID":"c3b796b1-e1ea-4f63-8639-dd8575ea6985","Type":"ContainerDied","Data":"2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f"} Nov 23 06:54:42 crc kubenswrapper[5028]: I1123 06:54:42.900715 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlftc" event={"ID":"c3b796b1-e1ea-4f63-8639-dd8575ea6985","Type":"ContainerStarted","Data":"88b181da018898fc1c56237b6fa92ec2d8392bb54ed0f57675154a4cdba9aff5"} Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.059936 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fab255-3f9b-4e01-af32-4fa824a86807" path="/var/lib/kubelet/pods/16fab255-3f9b-4e01-af32-4fa824a86807/volumes" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.060776 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266642af-8ffc-454c-b11e-d81483412956" path="/var/lib/kubelet/pods/266642af-8ffc-454c-b11e-d81483412956/volumes" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.061358 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bcb40b3-8fab-431d-a9a7-f85a23090456" path="/var/lib/kubelet/pods/8bcb40b3-8fab-431d-a9a7-f85a23090456/volumes" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.479729 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xjfv"] Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.481064 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.484345 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.488761 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xjfv"] Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.518779 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vcmw\" (UniqueName: \"kubernetes.io/projected/7dc64f53-e685-41c6-bf82-7448a3dd4875-kube-api-access-9vcmw\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.518828 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc64f53-e685-41c6-bf82-7448a3dd4875-catalog-content\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.518890 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc64f53-e685-41c6-bf82-7448a3dd4875-utilities\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.619321 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vcmw\" (UniqueName: \"kubernetes.io/projected/7dc64f53-e685-41c6-bf82-7448a3dd4875-kube-api-access-9vcmw\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.619363 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc64f53-e685-41c6-bf82-7448a3dd4875-catalog-content\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.619386 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc64f53-e685-41c6-bf82-7448a3dd4875-utilities\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.619788 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc64f53-e685-41c6-bf82-7448a3dd4875-catalog-content\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.619847 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc64f53-e685-41c6-bf82-7448a3dd4875-utilities\") pod \"redhat-marketplace-6xjfv\" (UID: 
\"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.636860 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vcmw\" (UniqueName: \"kubernetes.io/projected/7dc64f53-e685-41c6-bf82-7448a3dd4875-kube-api-access-9vcmw\") pod \"redhat-marketplace-6xjfv\" (UID: \"7dc64f53-e685-41c6-bf82-7448a3dd4875\") " pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.800556 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.909489 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlftc" event={"ID":"c3b796b1-e1ea-4f63-8639-dd8575ea6985","Type":"ContainerStarted","Data":"c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c"} Nov 23 06:54:43 crc kubenswrapper[5028]: I1123 06:54:43.988264 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xjfv"] Nov 23 06:54:44 crc kubenswrapper[5028]: W1123 06:54:44.017384 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dc64f53_e685_41c6_bf82_7448a3dd4875.slice/crio-0d5d9be7f5e7acdcf4aebffbeac8231592bb44a3d23d16062b8ca64730fb770a WatchSource:0}: Error finding container 0d5d9be7f5e7acdcf4aebffbeac8231592bb44a3d23d16062b8ca64730fb770a: Status 404 returned error can't find the container with id 0d5d9be7f5e7acdcf4aebffbeac8231592bb44a3d23d16062b8ca64730fb770a Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.078984 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qdghg"] Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.082833 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.086151 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.094308 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdghg"] Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.227619 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgtcm\" (UniqueName: \"kubernetes.io/projected/7708271d-af3b-49ce-b67e-d6fffd0116d8-kube-api-access-jgtcm\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.227709 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708271d-af3b-49ce-b67e-d6fffd0116d8-catalog-content\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.227754 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708271d-af3b-49ce-b67e-d6fffd0116d8-utilities\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.328849 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgtcm\" (UniqueName: \"kubernetes.io/projected/7708271d-af3b-49ce-b67e-d6fffd0116d8-kube-api-access-jgtcm\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.328901 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708271d-af3b-49ce-b67e-d6fffd0116d8-catalog-content\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.328927 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708271d-af3b-49ce-b67e-d6fffd0116d8-utilities\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.330338 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7708271d-af3b-49ce-b67e-d6fffd0116d8-utilities\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.330336 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7708271d-af3b-49ce-b67e-d6fffd0116d8-catalog-content\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " 
pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.345899 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgtcm\" (UniqueName: \"kubernetes.io/projected/7708271d-af3b-49ce-b67e-d6fffd0116d8-kube-api-access-jgtcm\") pod \"redhat-operators-qdghg\" (UID: \"7708271d-af3b-49ce-b67e-d6fffd0116d8\") " pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.407687 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.775210 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qdghg"] Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.915880 5028 generic.go:334] "Generic (PLEG): container finished" podID="7dc64f53-e685-41c6-bf82-7448a3dd4875" containerID="5953a3a0db67a8acacba92ec3f5d75fa9ca8a40088a0b088cc87da66f7f715f3" exitCode=0 Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.915932 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xjfv" event={"ID":"7dc64f53-e685-41c6-bf82-7448a3dd4875","Type":"ContainerDied","Data":"5953a3a0db67a8acacba92ec3f5d75fa9ca8a40088a0b088cc87da66f7f715f3"} Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.916005 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xjfv" event={"ID":"7dc64f53-e685-41c6-bf82-7448a3dd4875","Type":"ContainerStarted","Data":"0d5d9be7f5e7acdcf4aebffbeac8231592bb44a3d23d16062b8ca64730fb770a"} Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.918126 5028 generic.go:334] "Generic (PLEG): container finished" podID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerID="c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c" exitCode=0 Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.918184 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlftc" event={"ID":"c3b796b1-e1ea-4f63-8639-dd8575ea6985","Type":"ContainerDied","Data":"c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c"} Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.919887 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdghg" event={"ID":"7708271d-af3b-49ce-b67e-d6fffd0116d8","Type":"ContainerStarted","Data":"a8f19830de480d442fc8787e775fc42ddf40c9d944a9e1870e3167afe383e294"} Nov 23 06:54:44 crc kubenswrapper[5028]: I1123 06:54:44.920221 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdghg" event={"ID":"7708271d-af3b-49ce-b67e-d6fffd0116d8","Type":"ContainerStarted","Data":"cc6e6f2445f65630b0a75dbf2e0e73e4a4e0331b59377a8f1d72a66d6a320ec7"} Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.882369 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zs9s5"] Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.884189 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.890232 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.893854 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs9s5"] Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.925040 5028 generic.go:334] "Generic (PLEG): container finished" podID="7708271d-af3b-49ce-b67e-d6fffd0116d8" containerID="a8f19830de480d442fc8787e775fc42ddf40c9d944a9e1870e3167afe383e294" exitCode=0 Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.925133 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdghg" event={"ID":"7708271d-af3b-49ce-b67e-d6fffd0116d8","Type":"ContainerDied","Data":"a8f19830de480d442fc8787e775fc42ddf40c9d944a9e1870e3167afe383e294"} Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.927021 5028 generic.go:334] "Generic (PLEG): container finished" podID="7dc64f53-e685-41c6-bf82-7448a3dd4875" containerID="296bede3b487534b6490cfadfc97e91d1a80e74cd735d0d2fbdec75fc0742bc2" exitCode=0 Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.927049 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xjfv" event={"ID":"7dc64f53-e685-41c6-bf82-7448a3dd4875","Type":"ContainerDied","Data":"296bede3b487534b6490cfadfc97e91d1a80e74cd735d0d2fbdec75fc0742bc2"} Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.929598 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlftc" event={"ID":"c3b796b1-e1ea-4f63-8639-dd8575ea6985","Type":"ContainerStarted","Data":"fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b"} Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.948732 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgjl\" (UniqueName: \"kubernetes.io/projected/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-kube-api-access-xcgjl\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.948784 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-utilities\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.948855 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-catalog-content\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:45 crc kubenswrapper[5028]: I1123 06:54:45.957907 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xlftc" podStartSLOduration=2.498952557 podStartE2EDuration="4.957879248s" podCreationTimestamp="2025-11-23 06:54:41 +0000 UTC" firstStartedPulling="2025-11-23 06:54:42.903065489 +0000 UTC m=+266.600470268" 
lastFinishedPulling="2025-11-23 06:54:45.36199218 +0000 UTC m=+269.059396959" observedRunningTime="2025-11-23 06:54:45.956678157 +0000 UTC m=+269.654082926" watchObservedRunningTime="2025-11-23 06:54:45.957879248 +0000 UTC m=+269.655284017" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.049987 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-catalog-content\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.050090 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcgjl\" (UniqueName: \"kubernetes.io/projected/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-kube-api-access-xcgjl\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.050122 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-utilities\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.050634 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-utilities\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.050780 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-catalog-content\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.073102 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcgjl\" (UniqueName: \"kubernetes.io/projected/9cd6ae0b-ce93-4468-a204-e08c0781bfcb-kube-api-access-xcgjl\") pod \"certified-operators-zs9s5\" (UID: \"9cd6ae0b-ce93-4468-a204-e08c0781bfcb\") " pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.200400 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.621036 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zs9s5"] Nov 23 06:54:46 crc kubenswrapper[5028]: W1123 06:54:46.627232 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cd6ae0b_ce93_4468_a204_e08c0781bfcb.slice/crio-4c239d6b1094806a0ac5344480f8a3085630d733c1fcbb42a1959fddbc3b1504 WatchSource:0}: Error finding container 4c239d6b1094806a0ac5344480f8a3085630d733c1fcbb42a1959fddbc3b1504: Status 404 returned error can't find the container with id 4c239d6b1094806a0ac5344480f8a3085630d733c1fcbb42a1959fddbc3b1504 Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.936156 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdghg" event={"ID":"7708271d-af3b-49ce-b67e-d6fffd0116d8","Type":"ContainerStarted","Data":"5fc683a5bcb8a89c014ce4b8d6c891bb29175d0a7ad881bf01ca09059fa1a160"} Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.937613 5028 generic.go:334] "Generic (PLEG): container finished" podID="9cd6ae0b-ce93-4468-a204-e08c0781bfcb" containerID="46f777d675c3d9aefad72ec87c1d125d575ec919bfbf09b450912d4e9660e0df" exitCode=0 Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.937685 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs9s5" event={"ID":"9cd6ae0b-ce93-4468-a204-e08c0781bfcb","Type":"ContainerDied","Data":"46f777d675c3d9aefad72ec87c1d125d575ec919bfbf09b450912d4e9660e0df"} Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.937711 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs9s5" event={"ID":"9cd6ae0b-ce93-4468-a204-e08c0781bfcb","Type":"ContainerStarted","Data":"4c239d6b1094806a0ac5344480f8a3085630d733c1fcbb42a1959fddbc3b1504"} Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.939716 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xjfv" event={"ID":"7dc64f53-e685-41c6-bf82-7448a3dd4875","Type":"ContainerStarted","Data":"b17c1819cee76bd52327212ded59e8b34e237ce2857ae1b933f8a01896b28b11"} Nov 23 06:54:46 crc kubenswrapper[5028]: I1123 06:54:46.995133 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xjfv" podStartSLOduration=2.578722547 podStartE2EDuration="3.995115291s" podCreationTimestamp="2025-11-23 06:54:43 +0000 UTC" firstStartedPulling="2025-11-23 06:54:44.917310701 +0000 UTC m=+268.614715480" lastFinishedPulling="2025-11-23 06:54:46.333703445 +0000 UTC m=+270.031108224" observedRunningTime="2025-11-23 06:54:46.994087835 +0000 UTC m=+270.691492624" watchObservedRunningTime="2025-11-23 06:54:46.995115291 +0000 UTC m=+270.692520070" Nov 23 06:54:47 crc kubenswrapper[5028]: I1123 06:54:47.946131 5028 generic.go:334] "Generic (PLEG): container finished" podID="7708271d-af3b-49ce-b67e-d6fffd0116d8" containerID="5fc683a5bcb8a89c014ce4b8d6c891bb29175d0a7ad881bf01ca09059fa1a160" exitCode=0 Nov 23 06:54:47 crc kubenswrapper[5028]: I1123 06:54:47.946257 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdghg" event={"ID":"7708271d-af3b-49ce-b67e-d6fffd0116d8","Type":"ContainerDied","Data":"5fc683a5bcb8a89c014ce4b8d6c891bb29175d0a7ad881bf01ca09059fa1a160"} Nov 23 
06:54:47 crc kubenswrapper[5028]: I1123 06:54:47.951671 5028 generic.go:334] "Generic (PLEG): container finished" podID="9cd6ae0b-ce93-4468-a204-e08c0781bfcb" containerID="b36aa4760df370d5c1389b1453642f5b39c4fdead1acaeaeb79afe9c18f4692f" exitCode=0 Nov 23 06:54:47 crc kubenswrapper[5028]: I1123 06:54:47.951761 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs9s5" event={"ID":"9cd6ae0b-ce93-4468-a204-e08c0781bfcb","Type":"ContainerDied","Data":"b36aa4760df370d5c1389b1453642f5b39c4fdead1acaeaeb79afe9c18f4692f"} Nov 23 06:54:48 crc kubenswrapper[5028]: I1123 06:54:48.959305 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qdghg" event={"ID":"7708271d-af3b-49ce-b67e-d6fffd0116d8","Type":"ContainerStarted","Data":"b68d76d46ffaf678e2dd5abc0c13245708c561e87ca235a6f55db3aaa30e4ae2"} Nov 23 06:54:48 crc kubenswrapper[5028]: I1123 06:54:48.961092 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zs9s5" event={"ID":"9cd6ae0b-ce93-4468-a204-e08c0781bfcb","Type":"ContainerStarted","Data":"8342e92564f447616f0dc2772bcf2d36a11f289b7d6b218a675dfb215fdb6b9f"} Nov 23 06:54:48 crc kubenswrapper[5028]: I1123 06:54:48.979974 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qdghg" podStartSLOduration=2.537864797 podStartE2EDuration="4.979959164s" podCreationTimestamp="2025-11-23 06:54:44 +0000 UTC" firstStartedPulling="2025-11-23 06:54:45.926419047 +0000 UTC m=+269.623823826" lastFinishedPulling="2025-11-23 06:54:48.368513414 +0000 UTC m=+272.065918193" observedRunningTime="2025-11-23 06:54:48.977049551 +0000 UTC m=+272.674454340" watchObservedRunningTime="2025-11-23 06:54:48.979959164 +0000 UTC m=+272.677363943" Nov 23 06:54:48 crc kubenswrapper[5028]: I1123 06:54:48.997892 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zs9s5" podStartSLOduration=2.619823664 podStartE2EDuration="3.997874534s" podCreationTimestamp="2025-11-23 06:54:45 +0000 UTC" firstStartedPulling="2025-11-23 06:54:46.939029181 +0000 UTC m=+270.636433960" lastFinishedPulling="2025-11-23 06:54:48.317080051 +0000 UTC m=+272.014484830" observedRunningTime="2025-11-23 06:54:48.993794232 +0000 UTC m=+272.691199011" watchObservedRunningTime="2025-11-23 06:54:48.997874534 +0000 UTC m=+272.695279303" Nov 23 06:54:51 crc kubenswrapper[5028]: I1123 06:54:51.994644 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:51 crc kubenswrapper[5028]: I1123 06:54:51.994693 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:52 crc kubenswrapper[5028]: I1123 06:54:52.035861 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:53 crc kubenswrapper[5028]: I1123 06:54:53.022592 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xlftc" Nov 23 06:54:53 crc kubenswrapper[5028]: I1123 06:54:53.800824 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:53 crc kubenswrapper[5028]: I1123 06:54:53.801208 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:53 crc kubenswrapper[5028]: I1123 06:54:53.833837 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:54 crc kubenswrapper[5028]: I1123 06:54:54.023337 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xjfv" Nov 23 06:54:54 crc kubenswrapper[5028]: I1123 06:54:54.408577 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:54 crc kubenswrapper[5028]: I1123 06:54:54.409416 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:54 crc kubenswrapper[5028]: I1123 06:54:54.448057 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:55 crc kubenswrapper[5028]: I1123 06:54:55.051858 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qdghg" Nov 23 06:54:56 crc kubenswrapper[5028]: I1123 06:54:56.201035 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:56 crc kubenswrapper[5028]: I1123 06:54:56.201340 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:56 crc kubenswrapper[5028]: I1123 06:54:56.242648 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:54:57 crc kubenswrapper[5028]: I1123 06:54:57.036249 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zs9s5" Nov 23 06:55:16 crc kubenswrapper[5028]: I1123 06:55:16.809877 5028 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 23 06:56:30 crc kubenswrapper[5028]: I1123 06:56:30.947060 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:56:30 crc kubenswrapper[5028]: I1123 06:56:30.947559 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:57:00 crc kubenswrapper[5028]: I1123 06:57:00.946659 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:57:00 crc kubenswrapper[5028]: I1123 06:57:00.947306 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.129382 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t644r"] Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.130931 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.136103 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t644r"] Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243346 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d9n9\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-kube-api-access-6d9n9\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243433 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-bound-sa-token\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243454 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-registry-certificates\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243504 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-registry-tls\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243637 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243718 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243753 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.243800 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-trusted-ca\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.264255 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.345344 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-registry-certificates\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.345395 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-bound-sa-token\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.345420 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-registry-tls\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.345451 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.345475 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.345501 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-trusted-ca\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.345571 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d9n9\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-kube-api-access-6d9n9\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.346894 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.346925 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-trusted-ca\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.347540 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-registry-certificates\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.351221 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-registry-tls\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.351551 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.360699 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-bound-sa-token\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.361185 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d9n9\" (UniqueName: \"kubernetes.io/projected/4a8a3587-b8b8-4798-9107-6dc2b5c9d39f-kube-api-access-6d9n9\") pod \"image-registry-66df7c8f76-t644r\" (UID: \"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f\") " pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.450563 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:13 crc kubenswrapper[5028]: I1123 06:57:13.846828 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-t644r"] Nov 23 06:57:14 crc kubenswrapper[5028]: I1123 06:57:14.777321 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t644r" event={"ID":"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f","Type":"ContainerStarted","Data":"3718bb902892c849c53afbd6ecea8f60c3fa00688e7df13c2d23073c5d69404c"} Nov 23 06:57:14 crc kubenswrapper[5028]: I1123 06:57:14.777691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-t644r" event={"ID":"4a8a3587-b8b8-4798-9107-6dc2b5c9d39f","Type":"ContainerStarted","Data":"44ebdba10d47ab63cb10a9418d8e1903cf2ce4a5bff731793bfc1f26b5cf8b1b"} Nov 23 06:57:14 crc kubenswrapper[5028]: I1123 06:57:14.777709 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:14 crc kubenswrapper[5028]: I1123 06:57:14.797898 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-t644r" podStartSLOduration=1.797868204 podStartE2EDuration="1.797868204s" podCreationTimestamp="2025-11-23 06:57:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 06:57:14.792682399 +0000 UTC m=+418.490087178" watchObservedRunningTime="2025-11-23 06:57:14.797868204 +0000 UTC m=+418.495273013" Nov 23 06:57:30 crc kubenswrapper[5028]: I1123 06:57:30.946388 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 06:57:30 crc kubenswrapper[5028]: I1123 06:57:30.946940 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 06:57:30 crc kubenswrapper[5028]: I1123 06:57:30.947034 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 06:57:30 crc kubenswrapper[5028]: I1123 06:57:30.947566 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b484d7d14b5eba38f48f7afe610bbdd1e7f2ac34f5b939229fb409571cfc4e5a"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 06:57:30 crc kubenswrapper[5028]: I1123 06:57:30.947642 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://b484d7d14b5eba38f48f7afe610bbdd1e7f2ac34f5b939229fb409571cfc4e5a" gracePeriod=600 Nov 23 06:57:31 crc kubenswrapper[5028]: I1123 
Nov 23 06:57:31 crc kubenswrapper[5028]: I1123 06:57:31.869193 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="b484d7d14b5eba38f48f7afe610bbdd1e7f2ac34f5b939229fb409571cfc4e5a" exitCode=0 Nov 23 06:57:31 crc kubenswrapper[5028]: I1123 06:57:31.869258 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"b484d7d14b5eba38f48f7afe610bbdd1e7f2ac34f5b939229fb409571cfc4e5a"} Nov 23 06:57:31 crc kubenswrapper[5028]: I1123 06:57:31.869563 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"51d7af075637cdb224580c90d38df94102b585b6fbc5d3e5527e95235bfff29d"} Nov 23 06:57:31 crc kubenswrapper[5028]: I1123 06:57:31.869581 5028 scope.go:117] "RemoveContainer" containerID="83a2bf62e2b02ed6ff1152d8de4feae01e1d53801704c93de938b56dad0f7556" Nov 23 06:57:33 crc kubenswrapper[5028]: I1123 06:57:33.462832 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-t644r" Nov 23 06:57:33 crc kubenswrapper[5028]: I1123 06:57:33.537992 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mmks"] Nov 23 06:57:58 crc kubenswrapper[5028]: I1123 06:57:58.586797 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" podUID="3ad0fd40-348f-46f6-87f8-001fc9918495" containerName="registry" containerID="cri-o://cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097" gracePeriod=30 Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.026510 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.033750 5028 generic.go:334] "Generic (PLEG): container finished" podID="3ad0fd40-348f-46f6-87f8-001fc9918495" containerID="cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097" exitCode=0 Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.033774 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" event={"ID":"3ad0fd40-348f-46f6-87f8-001fc9918495","Type":"ContainerDied","Data":"cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097"} Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.033791 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.033837 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8mmks" event={"ID":"3ad0fd40-348f-46f6-87f8-001fc9918495","Type":"ContainerDied","Data":"aca0d3a0d4b094fde3d74d330194d27787bef15fb6fde59bc64b59229e61447a"} Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.033862 5028 scope.go:117] "RemoveContainer" containerID="cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.055306 5028 scope.go:117] "RemoveContainer" containerID="cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097" Nov 23 06:57:59 crc kubenswrapper[5028]: E1123 06:57:59.055879 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097\": container with ID starting with cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097 not found: ID does not exist" containerID="cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.056163 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097"} err="failed to get container status \"cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097\": rpc error: code = NotFound desc = could not find container \"cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097\": container with ID starting with cc7b62197664da4293f25a0ab5655d7cdd961d97ee065829bf25335b54ff5097 not found: ID does not exist" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.203938 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-tls\") pod \"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.204230 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.204282 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ad0fd40-348f-46f6-87f8-001fc9918495-installation-pull-secrets\") pod \"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.204307 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-certificates\") pod \"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.204336 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-trusted-ca\") pod 
\"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.204361 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ad0fd40-348f-46f6-87f8-001fc9918495-ca-trust-extracted\") pod \"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.204419 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-bound-sa-token\") pod \"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.204447 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk94r\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-kube-api-access-kk94r\") pod \"3ad0fd40-348f-46f6-87f8-001fc9918495\" (UID: \"3ad0fd40-348f-46f6-87f8-001fc9918495\") " Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.206504 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.206590 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.212381 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-kube-api-access-kk94r" (OuterVolumeSpecName: "kube-api-access-kk94r") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "kube-api-access-kk94r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.212535 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.212551 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad0fd40-348f-46f6-87f8-001fc9918495-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.213526 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.216332 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.224269 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad0fd40-348f-46f6-87f8-001fc9918495-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3ad0fd40-348f-46f6-87f8-001fc9918495" (UID: "3ad0fd40-348f-46f6-87f8-001fc9918495"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.305749 5028 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.305820 5028 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ad0fd40-348f-46f6-87f8-001fc9918495-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.305832 5028 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.305842 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ad0fd40-348f-46f6-87f8-001fc9918495-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.305852 5028 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ad0fd40-348f-46f6-87f8-001fc9918495-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.305861 5028 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.305870 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk94r\" (UniqueName: \"kubernetes.io/projected/3ad0fd40-348f-46f6-87f8-001fc9918495-kube-api-access-kk94r\") on node \"crc\" DevicePath \"\"" Nov 23 06:57:59 crc kubenswrapper[5028]: I1123 06:57:59.361977 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mmks"] Nov 23 06:57:59 crc 
kubenswrapper[5028]: I1123 06:57:59.365271 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8mmks"] Nov 23 06:58:01 crc kubenswrapper[5028]: I1123 06:58:01.066513 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad0fd40-348f-46f6-87f8-001fc9918495" path="/var/lib/kubelet/pods/3ad0fd40-348f-46f6-87f8-001fc9918495/volumes" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.143017 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf"] Nov 23 07:00:00 crc kubenswrapper[5028]: E1123 07:00:00.143806 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad0fd40-348f-46f6-87f8-001fc9918495" containerName="registry" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.143820 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad0fd40-348f-46f6-87f8-001fc9918495" containerName="registry" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.143979 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad0fd40-348f-46f6-87f8-001fc9918495" containerName="registry" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.144413 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.148844 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.149343 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.149338 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf"] Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.279059 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54ef0f81-8b76-4b7c-91f8-edb0791421c9-secret-volume\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.279130 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhgr\" (UniqueName: \"kubernetes.io/projected/54ef0f81-8b76-4b7c-91f8-edb0791421c9-kube-api-access-2mhgr\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.279214 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ef0f81-8b76-4b7c-91f8-edb0791421c9-config-volume\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.380343 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/54ef0f81-8b76-4b7c-91f8-edb0791421c9-secret-volume\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.380711 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhgr\" (UniqueName: \"kubernetes.io/projected/54ef0f81-8b76-4b7c-91f8-edb0791421c9-kube-api-access-2mhgr\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.381110 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ef0f81-8b76-4b7c-91f8-edb0791421c9-config-volume\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.381834 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ef0f81-8b76-4b7c-91f8-edb0791421c9-config-volume\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.387986 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54ef0f81-8b76-4b7c-91f8-edb0791421c9-secret-volume\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.396474 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhgr\" (UniqueName: \"kubernetes.io/projected/54ef0f81-8b76-4b7c-91f8-edb0791421c9-kube-api-access-2mhgr\") pod \"collect-profiles-29398020-bbzgf\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.467266 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.857242 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf"] Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.945846 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:00:00 crc kubenswrapper[5028]: I1123 07:00:00.945907 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:00:01 crc kubenswrapper[5028]: I1123 07:00:01.172316 5028 generic.go:334] "Generic (PLEG): container finished" podID="54ef0f81-8b76-4b7c-91f8-edb0791421c9" containerID="3970b8797d02c8c366db59a03f6b97f46485da40662f21bfb4d83497d0380546" exitCode=0 Nov 23 07:00:01 crc kubenswrapper[5028]: I1123 07:00:01.172365 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" event={"ID":"54ef0f81-8b76-4b7c-91f8-edb0791421c9","Type":"ContainerDied","Data":"3970b8797d02c8c366db59a03f6b97f46485da40662f21bfb4d83497d0380546"} Nov 23 07:00:01 crc kubenswrapper[5028]: I1123 07:00:01.172440 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" event={"ID":"54ef0f81-8b76-4b7c-91f8-edb0791421c9","Type":"ContainerStarted","Data":"4babc1c48c569d3be94458cea0db2db30ed83aaaa461f4730dd1a708db402bed"} Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.400604 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.508603 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ef0f81-8b76-4b7c-91f8-edb0791421c9-config-volume\") pod \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.508697 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mhgr\" (UniqueName: \"kubernetes.io/projected/54ef0f81-8b76-4b7c-91f8-edb0791421c9-kube-api-access-2mhgr\") pod \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.508782 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54ef0f81-8b76-4b7c-91f8-edb0791421c9-secret-volume\") pod \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\" (UID: \"54ef0f81-8b76-4b7c-91f8-edb0791421c9\") " Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.510919 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54ef0f81-8b76-4b7c-91f8-edb0791421c9-config-volume" (OuterVolumeSpecName: "config-volume") pod "54ef0f81-8b76-4b7c-91f8-edb0791421c9" (UID: "54ef0f81-8b76-4b7c-91f8-edb0791421c9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.519310 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54ef0f81-8b76-4b7c-91f8-edb0791421c9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54ef0f81-8b76-4b7c-91f8-edb0791421c9" (UID: "54ef0f81-8b76-4b7c-91f8-edb0791421c9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.519357 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ef0f81-8b76-4b7c-91f8-edb0791421c9-kube-api-access-2mhgr" (OuterVolumeSpecName: "kube-api-access-2mhgr") pod "54ef0f81-8b76-4b7c-91f8-edb0791421c9" (UID: "54ef0f81-8b76-4b7c-91f8-edb0791421c9"). InnerVolumeSpecName "kube-api-access-2mhgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.610737 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mhgr\" (UniqueName: \"kubernetes.io/projected/54ef0f81-8b76-4b7c-91f8-edb0791421c9-kube-api-access-2mhgr\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.610828 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54ef0f81-8b76-4b7c-91f8-edb0791421c9-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:02 crc kubenswrapper[5028]: I1123 07:00:02.610845 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54ef0f81-8b76-4b7c-91f8-edb0791421c9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:00:03 crc kubenswrapper[5028]: I1123 07:00:03.187737 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" event={"ID":"54ef0f81-8b76-4b7c-91f8-edb0791421c9","Type":"ContainerDied","Data":"4babc1c48c569d3be94458cea0db2db30ed83aaaa461f4730dd1a708db402bed"} Nov 23 07:00:03 crc kubenswrapper[5028]: I1123 07:00:03.187781 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4babc1c48c569d3be94458cea0db2db30ed83aaaa461f4730dd1a708db402bed" Nov 23 07:00:03 crc kubenswrapper[5028]: I1123 07:00:03.187809 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf" Nov 23 07:00:30 crc kubenswrapper[5028]: I1123 07:00:30.946447 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:00:30 crc kubenswrapper[5028]: I1123 07:00:30.947143 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:01:00 crc kubenswrapper[5028]: I1123 07:01:00.947150 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:01:00 crc kubenswrapper[5028]: I1123 07:01:00.949938 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:01:00 crc kubenswrapper[5028]: I1123 07:01:00.950188 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:01:00 crc kubenswrapper[5028]: I1123 07:01:00.951137 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"51d7af075637cdb224580c90d38df94102b585b6fbc5d3e5527e95235bfff29d"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:01:00 crc kubenswrapper[5028]: I1123 07:01:00.951395 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://51d7af075637cdb224580c90d38df94102b585b6fbc5d3e5527e95235bfff29d" gracePeriod=600 Nov 23 07:01:01 crc kubenswrapper[5028]: I1123 07:01:01.602579 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="51d7af075637cdb224580c90d38df94102b585b6fbc5d3e5527e95235bfff29d" exitCode=0 Nov 23 07:01:01 crc kubenswrapper[5028]: I1123 07:01:01.602673 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"51d7af075637cdb224580c90d38df94102b585b6fbc5d3e5527e95235bfff29d"} Nov 23 07:01:01 crc kubenswrapper[5028]: I1123 07:01:01.603132 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"7ae2ca72370a9e3e15bd4e9680a68748662e74c8a868e4dcea0405be9f5e30cb"} Nov 23 07:01:01 crc kubenswrapper[5028]: I1123 07:01:01.603167 5028 scope.go:117] "RemoveContainer" containerID="b484d7d14b5eba38f48f7afe610bbdd1e7f2ac34f5b939229fb409571cfc4e5a" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.469898 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xbtxp"] Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.470845 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-controller" containerID="cri-o://b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.471236 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="sbdb" containerID="cri-o://280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.471276 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="nbdb" containerID="cri-o://9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.471307 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="northd" containerID="cri-o://9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.471339 5028 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.471368 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-node" containerID="cri-o://250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.471396 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-acl-logging" containerID="cri-o://cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.504677 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" containerID="cri-o://e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" gracePeriod=30 Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.788596 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/3.log" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.790360 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovn-acl-logging/0.log" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.790703 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovn-controller/0.log" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.791026 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.840941 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hrg5d"] Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841137 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ef0f81-8b76-4b7c-91f8-edb0791421c9" containerName="collect-profiles" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841198 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ef0f81-8b76-4b7c-91f8-edb0791421c9" containerName="collect-profiles" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841211 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-node" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841218 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-node" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841226 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841233 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841244 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841251 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841262 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841269 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841280 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841287 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841297 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kubecfg-setup" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841305 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kubecfg-setup" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841316 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="nbdb" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841321 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="nbdb" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841329 5028 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="sbdb" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841335 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="sbdb" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841345 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="northd" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841351 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="northd" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841360 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841365 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841374 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-acl-logging" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841379 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-acl-logging" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841387 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841393 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841484 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="nbdb" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841499 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841512 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841520 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovn-acl-logging" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841534 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-node" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841544 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="kube-rbac-proxy-ovn-metrics" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841551 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841558 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="northd" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841570 5028 
memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="sbdb" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841581 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841591 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841600 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ef0f81-8b76-4b7c-91f8-edb0791421c9" containerName="collect-profiles" Nov 23 07:02:36 crc kubenswrapper[5028]: E1123 07:02:36.841727 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841740 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.841871 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerName="ovnkube-controller" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843424 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843520 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdsqn\" (UniqueName: \"kubernetes.io/projected/68dc0fb8-309c-46ef-a4f8-f0eff3169061-kube-api-access-xdsqn\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843553 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-netd\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843573 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-log-socket\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843646 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovn-node-metrics-cert\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843662 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-env-overrides\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843675 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-openvswitch\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843690 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-kubelet\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843710 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-script-lib\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843728 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-etc-openvswitch\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843773 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-ovn-kubernetes\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843788 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-node-log\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843806 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-bin\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843827 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-systemd-units\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843853 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-config\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843866 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-netns\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843883 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-slash\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843899 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-systemd\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843917 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-ovn\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843932 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-var-lib-cni-networks-ovn-kubernetes\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.843973 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-var-lib-openvswitch\") pod \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\" (UID: \"68dc0fb8-309c-46ef-a4f8-f0eff3169061\") " Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844049 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844200 5028 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844236 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844260 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-node-log" (OuterVolumeSpecName: "node-log") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844276 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844293 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844654 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844689 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-log-socket" (OuterVolumeSpecName: "log-socket") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844780 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844871 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844891 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.844993 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). 
InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.845419 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.845466 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.845486 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.845503 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-slash" (OuterVolumeSpecName: "host-slash") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.845919 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.845973 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.849922 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68dc0fb8-309c-46ef-a4f8-f0eff3169061-kube-api-access-xdsqn" (OuterVolumeSpecName: "kube-api-access-xdsqn") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "kube-api-access-xdsqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.849977 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.860422 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "68dc0fb8-309c-46ef-a4f8-f0eff3169061" (UID: "68dc0fb8-309c-46ef-a4f8-f0eff3169061"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.945172 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-etc-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.945592 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.945747 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-ovn\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.945887 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-run-netns\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946015 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-cni-netd\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946124 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-kubelet\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946228 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946362 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnc4\" (UniqueName: \"kubernetes.io/projected/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-kube-api-access-lbnc4\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946416 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-systemd-units\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946434 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovnkube-config\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946462 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-cni-bin\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946486 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-env-overrides\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946506 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-var-lib-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946523 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-slash\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946553 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-systemd\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc 
kubenswrapper[5028]: I1123 07:02:36.946584 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-node-log\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946613 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-log-socket\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946635 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovn-node-metrics-cert\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946658 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovnkube-script-lib\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946676 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946739 5028 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-slash\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946749 5028 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946759 5028 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946769 5028 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946778 5028 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946790 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdsqn\" (UniqueName: 
\"kubernetes.io/projected/68dc0fb8-309c-46ef-a4f8-f0eff3169061-kube-api-access-xdsqn\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946800 5028 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946810 5028 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-log-socket\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946818 5028 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946827 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946836 5028 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946847 5028 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946855 5028 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946863 5028 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946872 5028 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-node-log\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946880 5028 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946888 5028 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946896 5028 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68dc0fb8-309c-46ef-a4f8-f0eff3169061-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:36 crc kubenswrapper[5028]: I1123 07:02:36.946904 5028 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/68dc0fb8-309c-46ef-a4f8-f0eff3169061-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047728 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-var-lib-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047767 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-env-overrides\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047787 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-slash\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047806 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-systemd\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047824 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-node-log\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047843 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-log-socket\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047845 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-var-lib-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047857 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovn-node-metrics-cert\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047903 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovnkube-script-lib\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047925 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047921 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-systemd\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047970 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-etc-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.047992 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048007 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-node-log\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048016 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-ovn\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048035 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-etc-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048065 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-ovn\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048063 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048103 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-run-netns\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048102 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-run-openvswitch\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048086 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-run-netns\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048083 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-log-socket\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048160 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-slash\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048194 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-cni-netd\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048171 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-cni-netd\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048247 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048266 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-kubelet\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048290 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbnc4\" 
(UniqueName: \"kubernetes.io/projected/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-kube-api-access-lbnc4\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048309 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-kubelet\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048314 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-systemd-units\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048330 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovnkube-config\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048355 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-cni-bin\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048434 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-cni-bin\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048289 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.048465 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-systemd-units\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.049002 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-env-overrides\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.049324 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovnkube-script-lib\") pod 
\"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.049439 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovnkube-config\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.051880 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-ovn-node-metrics-cert\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.064074 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbnc4\" (UniqueName: \"kubernetes.io/projected/52331cb0-aa28-4c0a-8ea7-13da876e7ef4-kube-api-access-lbnc4\") pod \"ovnkube-node-hrg5d\" (UID: \"52331cb0-aa28-4c0a-8ea7-13da876e7ef4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.165248 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.398918 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovnkube-controller/3.log" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.402090 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovn-acl-logging/0.log" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.402623 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xbtxp_68dc0fb8-309c-46ef-a4f8-f0eff3169061/ovn-controller/0.log" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403042 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403065 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403074 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403082 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403090 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403098 5028 generic.go:334] "Generic (PLEG): container 
finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403107 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" exitCode=143 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403114 5028 generic.go:334] "Generic (PLEG): container finished" podID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" containerID="b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" exitCode=143 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403162 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403211 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403238 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403263 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403277 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403290 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403306 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403326 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403347 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403346 5028 scope.go:117] "RemoveContainer" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403355 5028 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403437 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403462 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403470 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403477 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403484 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403492 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403518 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403550 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403559 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403567 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403575 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403583 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403595 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403602 5028 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403610 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403616 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403623 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403632 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403644 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403653 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403660 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403666 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.403673 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404205 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404213 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404220 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404227 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404233 5028 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404245 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xbtxp" event={"ID":"68dc0fb8-309c-46ef-a4f8-f0eff3169061","Type":"ContainerDied","Data":"a13fd0e49be3314d313ae5f386826636968ec7b14dd39fd80ce239279a41dda3"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404260 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404272 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404279 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404287 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404293 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404299 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404306 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404330 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404336 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.404342 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.420194 5028 generic.go:334] "Generic (PLEG): container finished" podID="52331cb0-aa28-4c0a-8ea7-13da876e7ef4" containerID="209a5482bc83b92e369b852883d7ec6b8e1e84ab9132f07d3405048bc18675a3" exitCode=0 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.420272 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerDied","Data":"209a5482bc83b92e369b852883d7ec6b8e1e84ab9132f07d3405048bc18675a3"} Nov 23 07:02:37 crc kubenswrapper[5028]: 
I1123 07:02:37.420997 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"3a125812eb732c30c54a012a6f7dfefc79b38b1b5585cc60b34531ef62aeae2c"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.424586 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/2.log" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.425211 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/1.log" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.425330 5028 generic.go:334] "Generic (PLEG): container finished" podID="e634c65f-8585-4d5d-b929-b9e1255f8921" containerID="75d33e1dc0b68ad40438ab47e02f0cf419a600e603e43938a10adad0b49ac4a8" exitCode=2 Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.425453 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerDied","Data":"75d33e1dc0b68ad40438ab47e02f0cf419a600e603e43938a10adad0b49ac4a8"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.425501 5028 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad"} Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.426136 5028 scope.go:117] "RemoveContainer" containerID="75d33e1dc0b68ad40438ab47e02f0cf419a600e603e43938a10adad0b49ac4a8" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.496856 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.517119 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xbtxp"] Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.517219 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xbtxp"] Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.552713 5028 scope.go:117] "RemoveContainer" containerID="280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.573776 5028 scope.go:117] "RemoveContainer" containerID="9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.639408 5028 scope.go:117] "RemoveContainer" containerID="9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.668844 5028 scope.go:117] "RemoveContainer" containerID="fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.682731 5028 scope.go:117] "RemoveContainer" containerID="250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.706147 5028 scope.go:117] "RemoveContainer" containerID="cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.721712 5028 scope.go:117] "RemoveContainer" containerID="b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.747275 5028 scope.go:117] 
"RemoveContainer" containerID="1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.782794 5028 scope.go:117] "RemoveContainer" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.783392 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": container with ID starting with e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052 not found: ID does not exist" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.783435 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} err="failed to get container status \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": rpc error: code = NotFound desc = could not find container \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": container with ID starting with e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.783463 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.783934 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": container with ID starting with af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124 not found: ID does not exist" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.784074 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} err="failed to get container status \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": rpc error: code = NotFound desc = could not find container \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": container with ID starting with af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.784097 5028 scope.go:117] "RemoveContainer" containerID="280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.784484 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": container with ID starting with 280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079 not found: ID does not exist" containerID="280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.784518 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} err="failed to get container status \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": 
rpc error: code = NotFound desc = could not find container \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": container with ID starting with 280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.784537 5028 scope.go:117] "RemoveContainer" containerID="9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.785074 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": container with ID starting with 9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568 not found: ID does not exist" containerID="9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.785155 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} err="failed to get container status \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": rpc error: code = NotFound desc = could not find container \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": container with ID starting with 9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.785201 5028 scope.go:117] "RemoveContainer" containerID="9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.785741 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": container with ID starting with 9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed not found: ID does not exist" containerID="9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.785767 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} err="failed to get container status \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": rpc error: code = NotFound desc = could not find container \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": container with ID starting with 9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.785783 5028 scope.go:117] "RemoveContainer" containerID="fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.786318 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": container with ID starting with fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248 not found: ID does not exist" containerID="fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.786379 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} err="failed to get container status \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": rpc error: code = NotFound desc = could not find container \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": container with ID starting with fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.786422 5028 scope.go:117] "RemoveContainer" containerID="250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.786820 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": container with ID starting with 250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1 not found: ID does not exist" containerID="250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.786846 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} err="failed to get container status \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": rpc error: code = NotFound desc = could not find container \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": container with ID starting with 250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.786867 5028 scope.go:117] "RemoveContainer" containerID="cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.787228 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": container with ID starting with cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9 not found: ID does not exist" containerID="cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.787257 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} err="failed to get container status \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": rpc error: code = NotFound desc = could not find container \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": container with ID starting with cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.787281 5028 scope.go:117] "RemoveContainer" containerID="b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.787610 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": container with ID starting with b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed not found: ID does not exist" 
containerID="b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.787640 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} err="failed to get container status \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": rpc error: code = NotFound desc = could not find container \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": container with ID starting with b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.787661 5028 scope.go:117] "RemoveContainer" containerID="1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648" Nov 23 07:02:37 crc kubenswrapper[5028]: E1123 07:02:37.788042 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": container with ID starting with 1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648 not found: ID does not exist" containerID="1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.788087 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} err="failed to get container status \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": rpc error: code = NotFound desc = could not find container \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": container with ID starting with 1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.788118 5028 scope.go:117] "RemoveContainer" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.788479 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} err="failed to get container status \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": rpc error: code = NotFound desc = could not find container \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": container with ID starting with e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.788507 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.789270 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} err="failed to get container status \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": rpc error: code = NotFound desc = could not find container \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": container with ID starting with af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.789312 5028 scope.go:117] "RemoveContainer" 
containerID="280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.789648 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} err="failed to get container status \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": rpc error: code = NotFound desc = could not find container \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": container with ID starting with 280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.789682 5028 scope.go:117] "RemoveContainer" containerID="9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.790041 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} err="failed to get container status \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": rpc error: code = NotFound desc = could not find container \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": container with ID starting with 9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.790074 5028 scope.go:117] "RemoveContainer" containerID="9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.790470 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} err="failed to get container status \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": rpc error: code = NotFound desc = could not find container \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": container with ID starting with 9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.790521 5028 scope.go:117] "RemoveContainer" containerID="fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.790969 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} err="failed to get container status \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": rpc error: code = NotFound desc = could not find container \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": container with ID starting with fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.791005 5028 scope.go:117] "RemoveContainer" containerID="250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.791346 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} err="failed to get container status \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": rpc error: code = NotFound desc = could not find 
container \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": container with ID starting with 250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.791393 5028 scope.go:117] "RemoveContainer" containerID="cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.791770 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} err="failed to get container status \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": rpc error: code = NotFound desc = could not find container \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": container with ID starting with cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.791797 5028 scope.go:117] "RemoveContainer" containerID="b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.792088 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} err="failed to get container status \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": rpc error: code = NotFound desc = could not find container \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": container with ID starting with b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.792123 5028 scope.go:117] "RemoveContainer" containerID="1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.792398 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} err="failed to get container status \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": rpc error: code = NotFound desc = could not find container \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": container with ID starting with 1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.792425 5028 scope.go:117] "RemoveContainer" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.792693 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} err="failed to get container status \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": rpc error: code = NotFound desc = could not find container \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": container with ID starting with e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.792718 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.792978 5028 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} err="failed to get container status \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": rpc error: code = NotFound desc = could not find container \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": container with ID starting with af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.793002 5028 scope.go:117] "RemoveContainer" containerID="280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.793231 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} err="failed to get container status \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": rpc error: code = NotFound desc = could not find container \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": container with ID starting with 280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.793253 5028 scope.go:117] "RemoveContainer" containerID="9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.793515 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} err="failed to get container status \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": rpc error: code = NotFound desc = could not find container \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": container with ID starting with 9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.793534 5028 scope.go:117] "RemoveContainer" containerID="9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.793792 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} err="failed to get container status \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": rpc error: code = NotFound desc = could not find container \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": container with ID starting with 9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.793832 5028 scope.go:117] "RemoveContainer" containerID="fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.794240 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} err="failed to get container status \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": rpc error: code = NotFound desc = could not find container \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": container with ID starting with 
fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.794262 5028 scope.go:117] "RemoveContainer" containerID="250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.794503 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} err="failed to get container status \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": rpc error: code = NotFound desc = could not find container \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": container with ID starting with 250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.794532 5028 scope.go:117] "RemoveContainer" containerID="cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.794752 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} err="failed to get container status \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": rpc error: code = NotFound desc = could not find container \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": container with ID starting with cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.794773 5028 scope.go:117] "RemoveContainer" containerID="b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.795042 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} err="failed to get container status \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": rpc error: code = NotFound desc = could not find container \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": container with ID starting with b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.795075 5028 scope.go:117] "RemoveContainer" containerID="1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.795355 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} err="failed to get container status \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": rpc error: code = NotFound desc = could not find container \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": container with ID starting with 1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.795390 5028 scope.go:117] "RemoveContainer" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.795609 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} err="failed to get container status \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": rpc error: code = NotFound desc = could not find container \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": container with ID starting with e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.795627 5028 scope.go:117] "RemoveContainer" containerID="af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.796638 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124"} err="failed to get container status \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": rpc error: code = NotFound desc = could not find container \"af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124\": container with ID starting with af6f54fba933ab7635c51124ef6b3226c96f0eedc95437ddf1178009c729a124 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.796673 5028 scope.go:117] "RemoveContainer" containerID="280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.797081 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079"} err="failed to get container status \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": rpc error: code = NotFound desc = could not find container \"280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079\": container with ID starting with 280783e63211bba219344f57565009b414d21797d4d494a1814b4e99681c9079 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.797106 5028 scope.go:117] "RemoveContainer" containerID="9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.797391 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568"} err="failed to get container status \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": rpc error: code = NotFound desc = could not find container \"9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568\": container with ID starting with 9fccf2834ec420c4eaeabaee3c9cf97d76f5e6a49a6028b305f7eea138c36568 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.797429 5028 scope.go:117] "RemoveContainer" containerID="9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.797701 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed"} err="failed to get container status \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": rpc error: code = NotFound desc = could not find container \"9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed\": container with ID starting with 9912c2f00852bdc5386d7994b8234e59b745f2b2ca1aee447e3b05802d07b0ed not found: ID does not exist" Nov 
23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.797722 5028 scope.go:117] "RemoveContainer" containerID="fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.797984 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248"} err="failed to get container status \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": rpc error: code = NotFound desc = could not find container \"fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248\": container with ID starting with fcb63e14287a4a378880774dbeed1114a7df5ac935a72fb9b01f246d70bd9248 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798012 5028 scope.go:117] "RemoveContainer" containerID="250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798238 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1"} err="failed to get container status \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": rpc error: code = NotFound desc = could not find container \"250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1\": container with ID starting with 250cafc3088afe78051fdeed296691be62aede03d3b5776b6999eeb3b68fa9d1 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798263 5028 scope.go:117] "RemoveContainer" containerID="cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798463 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9"} err="failed to get container status \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": rpc error: code = NotFound desc = could not find container \"cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9\": container with ID starting with cfc9445c7db924ba104900d0d2ff16251a94ad9361a4d133d3a4d8ee1c26adc9 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798485 5028 scope.go:117] "RemoveContainer" containerID="b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798641 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed"} err="failed to get container status \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": rpc error: code = NotFound desc = could not find container \"b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed\": container with ID starting with b393a127f3c2de6a253d0685eaa75099ebb9b87db7e56dde2158b653d5c966ed not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798660 5028 scope.go:117] "RemoveContainer" containerID="1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798809 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648"} err="failed to get container status 
\"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": rpc error: code = NotFound desc = could not find container \"1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648\": container with ID starting with 1b75fdcc1e4a1bb42239a0d30c4ca6e369bee2a111efd299dd0c74a2e27ef648 not found: ID does not exist" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.798825 5028 scope.go:117] "RemoveContainer" containerID="e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052" Nov 23 07:02:37 crc kubenswrapper[5028]: I1123 07:02:37.799013 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052"} err="failed to get container status \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": rpc error: code = NotFound desc = could not find container \"e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052\": container with ID starting with e2cafc3c67d61833684849350beab8acc6b8625728eee2ba461c980cdedd1052 not found: ID does not exist" Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.441971 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/2.log" Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.442844 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/1.log" Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.442924 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-m2sl7" event={"ID":"e634c65f-8585-4d5d-b929-b9e1255f8921","Type":"ContainerStarted","Data":"c12884292ca39cb3c397a2867d89bcda353874dda841bab7e3784b4a7676984a"} Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.448142 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"574a406bcc076d5db8fd1d60d5a4017ea553b7d03570f7dd45a697004d5cdcba"} Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.448174 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"24a2ee4eb299045aa67a730cc699e822e49ab6f23f2b756f00efa86389c44af2"} Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.448185 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"1d07e36f9f9ee463ffac1c67fd0bf78591d61138edee538a49f2b555147b1333"} Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.448194 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"ae9409660552b03ece6a76a53fbe4b15982876249542e17acad8b9d96b73bf3f"} Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.448202 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"831037f30b64c97b521dbb196112d3813582c207a1d330c15ce75c556c5bf4c8"} Nov 23 07:02:38 crc kubenswrapper[5028]: I1123 07:02:38.448211 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"647b4756f8a470aa42808c60a86b2136c6d6e2cd3ee0c24ec7b5c27c83b00a4c"} Nov 23 07:02:39 crc kubenswrapper[5028]: I1123 07:02:39.072386 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68dc0fb8-309c-46ef-a4f8-f0eff3169061" path="/var/lib/kubelet/pods/68dc0fb8-309c-46ef-a4f8-f0eff3169061/volumes" Nov 23 07:02:41 crc kubenswrapper[5028]: I1123 07:02:41.471218 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"567405ee86c1f5c3709c6df8fdbdca5ef3bc575b08c69f61fc3581b5a27aba6c"} Nov 23 07:02:43 crc kubenswrapper[5028]: I1123 07:02:43.489548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" event={"ID":"52331cb0-aa28-4c0a-8ea7-13da876e7ef4","Type":"ContainerStarted","Data":"927850ebcaeabb0d5f1475a8ddfa94c637dc39a4e6fd8432f8886630c3bee0cc"} Nov 23 07:02:43 crc kubenswrapper[5028]: I1123 07:02:43.490291 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:43 crc kubenswrapper[5028]: I1123 07:02:43.490310 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:43 crc kubenswrapper[5028]: I1123 07:02:43.519357 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:43 crc kubenswrapper[5028]: I1123 07:02:43.520686 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" podStartSLOduration=7.520658489 podStartE2EDuration="7.520658489s" podCreationTimestamp="2025-11-23 07:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:43.518342713 +0000 UTC m=+747.215747502" watchObservedRunningTime="2025-11-23 07:02:43.520658489 +0000 UTC m=+747.218063268" Nov 23 07:02:44 crc kubenswrapper[5028]: I1123 07:02:44.494933 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:44 crc kubenswrapper[5028]: I1123 07:02:44.526435 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:02:44 crc kubenswrapper[5028]: I1123 07:02:44.777376 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tcdk5"] Nov 23 07:02:44 crc kubenswrapper[5028]: I1123 07:02:44.777762 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerName="controller-manager" containerID="cri-o://e236e3c3875d3016e6bb387a0ff4a4e44496368c8dde8b01df1aa58e463d4cf9" gracePeriod=30 Nov 23 07:02:44 crc kubenswrapper[5028]: I1123 07:02:44.862004 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"] Nov 23 07:02:44 crc kubenswrapper[5028]: I1123 07:02:44.862240 5028 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" podUID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" containerName="route-controller-manager" containerID="cri-o://d30c214994c6830e46fa01fbf0ee0cded86a4dd94d5125b7392832470f16cf18" gracePeriod=30 Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.162894 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-tnq4l"] Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.163654 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.165457 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.166080 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.166084 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.166823 5028 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-t9w2b" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.171918 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tnq4l"] Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.275699 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/91b93076-449f-40db-897d-e51e37113585-crc-storage\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.275801 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkdx\" (UniqueName: \"kubernetes.io/projected/91b93076-449f-40db-897d-e51e37113585-kube-api-access-pfkdx\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.275929 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/91b93076-449f-40db-897d-e51e37113585-node-mnt\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.377457 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/91b93076-449f-40db-897d-e51e37113585-crc-storage\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.377750 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkdx\" (UniqueName: \"kubernetes.io/projected/91b93076-449f-40db-897d-e51e37113585-kube-api-access-pfkdx\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.377784 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" 
(UniqueName: \"kubernetes.io/host-path/91b93076-449f-40db-897d-e51e37113585-node-mnt\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.378022 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/91b93076-449f-40db-897d-e51e37113585-node-mnt\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.378707 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/91b93076-449f-40db-897d-e51e37113585-crc-storage\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.407978 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfkdx\" (UniqueName: \"kubernetes.io/projected/91b93076-449f-40db-897d-e51e37113585-kube-api-access-pfkdx\") pod \"crc-storage-crc-tnq4l\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: I1123 07:02:45.478603 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: E1123 07:02:45.523770 5028 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-tnq4l_crc-storage_91b93076-449f-40db-897d-e51e37113585_0(2822978c8b69c12dcb6222aa5b2f07647384fec42f66067d9f5eeaf1b5d66d9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 23 07:02:45 crc kubenswrapper[5028]: E1123 07:02:45.523840 5028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-tnq4l_crc-storage_91b93076-449f-40db-897d-e51e37113585_0(2822978c8b69c12dcb6222aa5b2f07647384fec42f66067d9f5eeaf1b5d66d9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: E1123 07:02:45.523864 5028 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-tnq4l_crc-storage_91b93076-449f-40db-897d-e51e37113585_0(2822978c8b69c12dcb6222aa5b2f07647384fec42f66067d9f5eeaf1b5d66d9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:45 crc kubenswrapper[5028]: E1123 07:02:45.523914 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-tnq4l_crc-storage(91b93076-449f-40db-897d-e51e37113585)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-tnq4l_crc-storage(91b93076-449f-40db-897d-e51e37113585)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-tnq4l_crc-storage_91b93076-449f-40db-897d-e51e37113585_0(2822978c8b69c12dcb6222aa5b2f07647384fec42f66067d9f5eeaf1b5d66d9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-tnq4l" podUID="91b93076-449f-40db-897d-e51e37113585" Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.511750 5028 generic.go:334] "Generic (PLEG): container finished" podID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerID="e236e3c3875d3016e6bb387a0ff4a4e44496368c8dde8b01df1aa58e463d4cf9" exitCode=0 Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.511848 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" event={"ID":"483a94d2-3437-4165-a8c3-6a014b2dcea4","Type":"ContainerDied","Data":"e236e3c3875d3016e6bb387a0ff4a4e44496368c8dde8b01df1aa58e463d4cf9"} Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.514427 5028 generic.go:334] "Generic (PLEG): container finished" podID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" containerID="d30c214994c6830e46fa01fbf0ee0cded86a4dd94d5125b7392832470f16cf18" exitCode=0 Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.514527 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.514533 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" event={"ID":"42ba6016-7bd8-4ee0-9dd9-111f320e064f","Type":"ContainerDied","Data":"d30c214994c6830e46fa01fbf0ee0cded86a4dd94d5125b7392832470f16cf18"} Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.515242 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.699493 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-tnq4l"] Nov 23 07:02:46 crc kubenswrapper[5028]: I1123 07:02:46.718540 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.043281 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.082673 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69755d58bb-j6j64"] Nov 23 07:02:47 crc kubenswrapper[5028]: E1123 07:02:47.082858 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerName="controller-manager" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.082871 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerName="controller-manager" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.082998 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" containerName="controller-manager" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.083341 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.089152 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69755d58bb-j6j64"] Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.091933 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.108593 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-client-ca\") pod \"483a94d2-3437-4165-a8c3-6a014b2dcea4\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.108703 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/483a94d2-3437-4165-a8c3-6a014b2dcea4-serving-cert\") pod \"483a94d2-3437-4165-a8c3-6a014b2dcea4\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.108752 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjwt6\" (UniqueName: \"kubernetes.io/projected/483a94d2-3437-4165-a8c3-6a014b2dcea4-kube-api-access-pjwt6\") pod \"483a94d2-3437-4165-a8c3-6a014b2dcea4\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.108809 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-config\") pod \"483a94d2-3437-4165-a8c3-6a014b2dcea4\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.108843 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-proxy-ca-bundles\") pod \"483a94d2-3437-4165-a8c3-6a014b2dcea4\" (UID: \"483a94d2-3437-4165-a8c3-6a014b2dcea4\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.109080 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-proxy-ca-bundles\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.109112 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tf7f\" (UniqueName: \"kubernetes.io/projected/3be58a77-793a-4222-9348-d79e7a6d5caa-kube-api-access-6tf7f\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.109175 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be58a77-793a-4222-9348-d79e7a6d5caa-serving-cert\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.109219 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-config\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " 
pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.109263 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-client-ca\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.109757 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-client-ca" (OuterVolumeSpecName: "client-ca") pod "483a94d2-3437-4165-a8c3-6a014b2dcea4" (UID: "483a94d2-3437-4165-a8c3-6a014b2dcea4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.109819 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "483a94d2-3437-4165-a8c3-6a014b2dcea4" (UID: "483a94d2-3437-4165-a8c3-6a014b2dcea4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.110010 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-config" (OuterVolumeSpecName: "config") pod "483a94d2-3437-4165-a8c3-6a014b2dcea4" (UID: "483a94d2-3437-4165-a8c3-6a014b2dcea4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.119255 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483a94d2-3437-4165-a8c3-6a014b2dcea4-kube-api-access-pjwt6" (OuterVolumeSpecName: "kube-api-access-pjwt6") pod "483a94d2-3437-4165-a8c3-6a014b2dcea4" (UID: "483a94d2-3437-4165-a8c3-6a014b2dcea4"). InnerVolumeSpecName "kube-api-access-pjwt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.120631 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483a94d2-3437-4165-a8c3-6a014b2dcea4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "483a94d2-3437-4165-a8c3-6a014b2dcea4" (UID: "483a94d2-3437-4165-a8c3-6a014b2dcea4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.209975 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ba6016-7bd8-4ee0-9dd9-111f320e064f-serving-cert\") pod \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.210712 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td2p4\" (UniqueName: \"kubernetes.io/projected/42ba6016-7bd8-4ee0-9dd9-111f320e064f-kube-api-access-td2p4\") pod \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.210744 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-client-ca\") pod \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211045 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-config\") pod \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\" (UID: \"42ba6016-7bd8-4ee0-9dd9-111f320e064f\") " Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211291 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-config\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211331 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-client-ca\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211353 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-proxy-ca-bundles\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211370 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tf7f\" (UniqueName: \"kubernetes.io/projected/3be58a77-793a-4222-9348-d79e7a6d5caa-kube-api-access-6tf7f\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211414 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be58a77-793a-4222-9348-d79e7a6d5caa-serving-cert\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 
07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211455 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211466 5028 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211476 5028 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/483a94d2-3437-4165-a8c3-6a014b2dcea4-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211484 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/483a94d2-3437-4165-a8c3-6a014b2dcea4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211493 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjwt6\" (UniqueName: \"kubernetes.io/projected/483a94d2-3437-4165-a8c3-6a014b2dcea4-kube-api-access-pjwt6\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211691 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-client-ca" (OuterVolumeSpecName: "client-ca") pod "42ba6016-7bd8-4ee0-9dd9-111f320e064f" (UID: "42ba6016-7bd8-4ee0-9dd9-111f320e064f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.211721 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-config" (OuterVolumeSpecName: "config") pod "42ba6016-7bd8-4ee0-9dd9-111f320e064f" (UID: "42ba6016-7bd8-4ee0-9dd9-111f320e064f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.213198 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-config\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.213766 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-proxy-ca-bundles\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.218069 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be58a77-793a-4222-9348-d79e7a6d5caa-client-ca\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.220740 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ba6016-7bd8-4ee0-9dd9-111f320e064f-kube-api-access-td2p4" (OuterVolumeSpecName: "kube-api-access-td2p4") pod "42ba6016-7bd8-4ee0-9dd9-111f320e064f" (UID: "42ba6016-7bd8-4ee0-9dd9-111f320e064f"). InnerVolumeSpecName "kube-api-access-td2p4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.220848 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be58a77-793a-4222-9348-d79e7a6d5caa-serving-cert\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.226394 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ba6016-7bd8-4ee0-9dd9-111f320e064f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "42ba6016-7bd8-4ee0-9dd9-111f320e064f" (UID: "42ba6016-7bd8-4ee0-9dd9-111f320e064f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.229157 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tf7f\" (UniqueName: \"kubernetes.io/projected/3be58a77-793a-4222-9348-d79e7a6d5caa-kube-api-access-6tf7f\") pod \"controller-manager-69755d58bb-j6j64\" (UID: \"3be58a77-793a-4222-9348-d79e7a6d5caa\") " pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.313026 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td2p4\" (UniqueName: \"kubernetes.io/projected/42ba6016-7bd8-4ee0-9dd9-111f320e064f-kube-api-access-td2p4\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.313065 5028 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.313076 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42ba6016-7bd8-4ee0-9dd9-111f320e064f-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.313085 5028 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42ba6016-7bd8-4ee0-9dd9-111f320e064f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.402685 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.532263 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tnq4l" event={"ID":"91b93076-449f-40db-897d-e51e37113585","Type":"ContainerStarted","Data":"55ecfa261bfa5a72aa9ac3dbec339ec495f8b61f37c47203d6264a3c3ed29988"} Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.535933 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" event={"ID":"483a94d2-3437-4165-a8c3-6a014b2dcea4","Type":"ContainerDied","Data":"232566c8bf23f51ed49eac5857d2568dd7bcc074c0e1c13175b80dc2e1b0713a"} Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.536035 5028 scope.go:117] "RemoveContainer" containerID="e236e3c3875d3016e6bb387a0ff4a4e44496368c8dde8b01df1aa58e463d4cf9" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.536035 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tcdk5" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.541221 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" event={"ID":"42ba6016-7bd8-4ee0-9dd9-111f320e064f","Type":"ContainerDied","Data":"c68feff1197bb7bb88b669fb311e1809bc88f1a2948425a121207fc4df9b1191"} Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.541369 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.588771 5028 scope.go:117] "RemoveContainer" containerID="d30c214994c6830e46fa01fbf0ee0cded86a4dd94d5125b7392832470f16cf18" Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.594314 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tcdk5"] Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.597304 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tcdk5"] Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.607360 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"] Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.609698 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jjtp4"] Nov 23 07:02:47 crc kubenswrapper[5028]: I1123 07:02:47.665763 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69755d58bb-j6j64"] Nov 23 07:02:47 crc kubenswrapper[5028]: W1123 07:02:47.674794 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3be58a77_793a_4222_9348_d79e7a6d5caa.slice/crio-b0ff0bb3b11f8e58a967b5a9f640bb8963332ee634bed4f9ec2745c73979e1eb WatchSource:0}: Error finding container b0ff0bb3b11f8e58a967b5a9f640bb8963332ee634bed4f9ec2745c73979e1eb: Status 404 returned error can't find the container with id b0ff0bb3b11f8e58a967b5a9f640bb8963332ee634bed4f9ec2745c73979e1eb Nov 23 07:02:48 crc kubenswrapper[5028]: I1123 07:02:48.555945 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" event={"ID":"3be58a77-793a-4222-9348-d79e7a6d5caa","Type":"ContainerStarted","Data":"a619c4ed2d6b95d7a6b82a56a901ce1814a54c3b9a4b9bc6142d3544ad79ef4c"} Nov 23 07:02:48 crc kubenswrapper[5028]: I1123 07:02:48.556480 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" event={"ID":"3be58a77-793a-4222-9348-d79e7a6d5caa","Type":"ContainerStarted","Data":"b0ff0bb3b11f8e58a967b5a9f640bb8963332ee634bed4f9ec2745c73979e1eb"} Nov 23 07:02:48 crc kubenswrapper[5028]: I1123 07:02:48.556974 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:48 crc kubenswrapper[5028]: I1123 07:02:48.566139 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" Nov 23 07:02:48 crc kubenswrapper[5028]: I1123 07:02:48.580614 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69755d58bb-j6j64" podStartSLOduration=4.580595348 podStartE2EDuration="4.580595348s" podCreationTimestamp="2025-11-23 07:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:48.57904612 +0000 UTC m=+752.276450899" watchObservedRunningTime="2025-11-23 07:02:48.580595348 +0000 UTC m=+752.278000127" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.062062 5028 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" path="/var/lib/kubelet/pods/42ba6016-7bd8-4ee0-9dd9-111f320e064f/volumes" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.063051 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483a94d2-3437-4165-a8c3-6a014b2dcea4" path="/var/lib/kubelet/pods/483a94d2-3437-4165-a8c3-6a014b2dcea4/volumes" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.160010 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr"] Nov 23 07:02:49 crc kubenswrapper[5028]: E1123 07:02:49.160344 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" containerName="route-controller-manager" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.160370 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" containerName="route-controller-manager" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.160559 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ba6016-7bd8-4ee0-9dd9-111f320e064f" containerName="route-controller-manager" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.161386 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.176663 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.177303 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.177314 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.177439 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.178063 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.178198 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.193216 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr"] Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.272819 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16aef3ca-7c1a-4a02-adf3-d3815510458c-serving-cert\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.272984 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aef3ca-7c1a-4a02-adf3-d3815510458c-config\") pod 
\"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.273525 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llttv\" (UniqueName: \"kubernetes.io/projected/16aef3ca-7c1a-4a02-adf3-d3815510458c-kube-api-access-llttv\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.275304 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16aef3ca-7c1a-4a02-adf3-d3815510458c-client-ca\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.377541 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llttv\" (UniqueName: \"kubernetes.io/projected/16aef3ca-7c1a-4a02-adf3-d3815510458c-kube-api-access-llttv\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.378365 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16aef3ca-7c1a-4a02-adf3-d3815510458c-client-ca\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.378418 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16aef3ca-7c1a-4a02-adf3-d3815510458c-serving-cert\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.378453 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aef3ca-7c1a-4a02-adf3-d3815510458c-config\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.380314 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16aef3ca-7c1a-4a02-adf3-d3815510458c-client-ca\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.381726 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16aef3ca-7c1a-4a02-adf3-d3815510458c-config\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: 
\"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.389787 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16aef3ca-7c1a-4a02-adf3-d3815510458c-serving-cert\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.407304 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llttv\" (UniqueName: \"kubernetes.io/projected/16aef3ca-7c1a-4a02-adf3-d3815510458c-kube-api-access-llttv\") pod \"route-controller-manager-6dc744c4bf-t6dlr\" (UID: \"16aef3ca-7c1a-4a02-adf3-d3815510458c\") " pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.491926 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:49 crc kubenswrapper[5028]: I1123 07:02:49.743223 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr"] Nov 23 07:02:49 crc kubenswrapper[5028]: W1123 07:02:49.752592 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16aef3ca_7c1a_4a02_adf3_d3815510458c.slice/crio-c3b817278d3b8254c3c7893f87ce5c0608070c6703f190c3b1768890901b8604 WatchSource:0}: Error finding container c3b817278d3b8254c3c7893f87ce5c0608070c6703f190c3b1768890901b8604: Status 404 returned error can't find the container with id c3b817278d3b8254c3c7893f87ce5c0608070c6703f190c3b1768890901b8604 Nov 23 07:02:50 crc kubenswrapper[5028]: I1123 07:02:50.572600 5028 generic.go:334] "Generic (PLEG): container finished" podID="91b93076-449f-40db-897d-e51e37113585" containerID="8357acbacaf85fadfd8a37d6b926c867ccb87c119194a8b6cac49175ac1bb44c" exitCode=0 Nov 23 07:02:50 crc kubenswrapper[5028]: I1123 07:02:50.572759 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tnq4l" event={"ID":"91b93076-449f-40db-897d-e51e37113585","Type":"ContainerDied","Data":"8357acbacaf85fadfd8a37d6b926c867ccb87c119194a8b6cac49175ac1bb44c"} Nov 23 07:02:50 crc kubenswrapper[5028]: I1123 07:02:50.575525 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" event={"ID":"16aef3ca-7c1a-4a02-adf3-d3815510458c","Type":"ContainerStarted","Data":"5940c27675f2b442d05681dabe800f2b1e56fabe62b249e9105b59197153eb85"} Nov 23 07:02:50 crc kubenswrapper[5028]: I1123 07:02:50.575572 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" event={"ID":"16aef3ca-7c1a-4a02-adf3-d3815510458c","Type":"ContainerStarted","Data":"c3b817278d3b8254c3c7893f87ce5c0608070c6703f190c3b1768890901b8604"} Nov 23 07:02:50 crc kubenswrapper[5028]: I1123 07:02:50.619920 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" podStartSLOduration=6.619877414 podStartE2EDuration="6.619877414s" podCreationTimestamp="2025-11-23 
07:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:02:50.618777167 +0000 UTC m=+754.316181946" watchObservedRunningTime="2025-11-23 07:02:50.619877414 +0000 UTC m=+754.317282233" Nov 23 07:02:51 crc kubenswrapper[5028]: I1123 07:02:51.584117 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:51 crc kubenswrapper[5028]: I1123 07:02:51.593741 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dc744c4bf-t6dlr" Nov 23 07:02:51 crc kubenswrapper[5028]: I1123 07:02:51.949904 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.021459 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/91b93076-449f-40db-897d-e51e37113585-node-mnt\") pod \"91b93076-449f-40db-897d-e51e37113585\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.021555 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/91b93076-449f-40db-897d-e51e37113585-crc-storage\") pod \"91b93076-449f-40db-897d-e51e37113585\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.021642 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkdx\" (UniqueName: \"kubernetes.io/projected/91b93076-449f-40db-897d-e51e37113585-kube-api-access-pfkdx\") pod \"91b93076-449f-40db-897d-e51e37113585\" (UID: \"91b93076-449f-40db-897d-e51e37113585\") " Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.022139 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91b93076-449f-40db-897d-e51e37113585-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "91b93076-449f-40db-897d-e51e37113585" (UID: "91b93076-449f-40db-897d-e51e37113585"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.031746 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b93076-449f-40db-897d-e51e37113585-kube-api-access-pfkdx" (OuterVolumeSpecName: "kube-api-access-pfkdx") pod "91b93076-449f-40db-897d-e51e37113585" (UID: "91b93076-449f-40db-897d-e51e37113585"). InnerVolumeSpecName "kube-api-access-pfkdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.049601 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b93076-449f-40db-897d-e51e37113585-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "91b93076-449f-40db-897d-e51e37113585" (UID: "91b93076-449f-40db-897d-e51e37113585"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.123172 5028 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/91b93076-449f-40db-897d-e51e37113585-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.123209 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkdx\" (UniqueName: \"kubernetes.io/projected/91b93076-449f-40db-897d-e51e37113585-kube-api-access-pfkdx\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.123220 5028 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/91b93076-449f-40db-897d-e51e37113585-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.504151 5028 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.592973 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-tnq4l" event={"ID":"91b93076-449f-40db-897d-e51e37113585","Type":"ContainerDied","Data":"55ecfa261bfa5a72aa9ac3dbec339ec495f8b61f37c47203d6264a3c3ed29988"} Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.593048 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55ecfa261bfa5a72aa9ac3dbec339ec495f8b61f37c47203d6264a3c3ed29988" Nov 23 07:02:52 crc kubenswrapper[5028]: I1123 07:02:52.592992 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-tnq4l" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.855510 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9"] Nov 23 07:02:59 crc kubenswrapper[5028]: E1123 07:02:59.856765 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b93076-449f-40db-897d-e51e37113585" containerName="storage" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.856785 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b93076-449f-40db-897d-e51e37113585" containerName="storage" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.856928 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b93076-449f-40db-897d-e51e37113585" containerName="storage" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.858023 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.864178 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.876372 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9"] Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.953081 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.953147 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44sw5\" (UniqueName: \"kubernetes.io/projected/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-kube-api-access-44sw5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:02:59 crc kubenswrapper[5028]: I1123 07:02:59.953180 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.055111 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44sw5\" (UniqueName: \"kubernetes.io/projected/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-kube-api-access-44sw5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.055201 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.055314 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.056051 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.056168 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.082369 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44sw5\" (UniqueName: \"kubernetes.io/projected/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-kube-api-access-44sw5\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.183370 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.623212 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9"] Nov 23 07:03:00 crc kubenswrapper[5028]: I1123 07:03:00.665636 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" event={"ID":"7db5854f-5ab0-47a7-8e9a-cedb69a5d922","Type":"ContainerStarted","Data":"8cebe4fb963b2a18ef8eda223f1276cf892e121ba8521cd3d7a79294512e887d"} Nov 23 07:03:01 crc kubenswrapper[5028]: I1123 07:03:01.675707 5028 generic.go:334] "Generic (PLEG): container finished" podID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerID="51fbc901b8c2cba9965dc0c47c961961f82065d0079181d735e41a2aff44089a" exitCode=0 Nov 23 07:03:01 crc kubenswrapper[5028]: I1123 07:03:01.675809 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" event={"ID":"7db5854f-5ab0-47a7-8e9a-cedb69a5d922","Type":"ContainerDied","Data":"51fbc901b8c2cba9965dc0c47c961961f82065d0079181d735e41a2aff44089a"} Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.116824 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pt4wt"] Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.118830 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.134053 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt4wt"] Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.194546 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-catalog-content\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.194621 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-utilities\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.194661 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxft\" (UniqueName: \"kubernetes.io/projected/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-kube-api-access-mwxft\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.296226 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-utilities\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.296359 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxft\" (UniqueName: \"kubernetes.io/projected/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-kube-api-access-mwxft\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.296505 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-catalog-content\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.296833 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-utilities\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.297345 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-catalog-content\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.329290 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mwxft\" (UniqueName: \"kubernetes.io/projected/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-kube-api-access-mwxft\") pod \"redhat-operators-pt4wt\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.474003 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:02 crc kubenswrapper[5028]: I1123 07:03:02.924666 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt4wt"] Nov 23 07:03:03 crc kubenswrapper[5028]: I1123 07:03:03.691825 5028 generic.go:334] "Generic (PLEG): container finished" podID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerID="b1379f5359ed42d9ccc9c73caff33fa443ea3b8a11665d8ad291c46cc9a34dce" exitCode=0 Nov 23 07:03:03 crc kubenswrapper[5028]: I1123 07:03:03.691971 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" event={"ID":"7db5854f-5ab0-47a7-8e9a-cedb69a5d922","Type":"ContainerDied","Data":"b1379f5359ed42d9ccc9c73caff33fa443ea3b8a11665d8ad291c46cc9a34dce"} Nov 23 07:03:03 crc kubenswrapper[5028]: I1123 07:03:03.694357 5028 generic.go:334] "Generic (PLEG): container finished" podID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerID="c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0" exitCode=0 Nov 23 07:03:03 crc kubenswrapper[5028]: I1123 07:03:03.694418 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt4wt" event={"ID":"6ed8bf71-bcce-40fb-9606-9bd8956e67e0","Type":"ContainerDied","Data":"c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0"} Nov 23 07:03:03 crc kubenswrapper[5028]: I1123 07:03:03.694475 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt4wt" event={"ID":"6ed8bf71-bcce-40fb-9606-9bd8956e67e0","Type":"ContainerStarted","Data":"a64fe3ee72dfdbfaf93405652c1c9bf2d5aa15243efbb1abf211104726e27e4a"} Nov 23 07:03:04 crc kubenswrapper[5028]: I1123 07:03:04.705407 5028 generic.go:334] "Generic (PLEG): container finished" podID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerID="0ca14df6a998ea7ed12fe33e57cc5e1ec70219ca1048a702b8e0a1ee323b9dbf" exitCode=0 Nov 23 07:03:04 crc kubenswrapper[5028]: I1123 07:03:04.705457 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" event={"ID":"7db5854f-5ab0-47a7-8e9a-cedb69a5d922","Type":"ContainerDied","Data":"0ca14df6a998ea7ed12fe33e57cc5e1ec70219ca1048a702b8e0a1ee323b9dbf"} Nov 23 07:03:05 crc kubenswrapper[5028]: I1123 07:03:05.715464 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt4wt" event={"ID":"6ed8bf71-bcce-40fb-9606-9bd8956e67e0","Type":"ContainerStarted","Data":"219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382"} Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.080537 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.191486 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44sw5\" (UniqueName: \"kubernetes.io/projected/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-kube-api-access-44sw5\") pod \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.191601 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-bundle\") pod \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.191666 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-util\") pod \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\" (UID: \"7db5854f-5ab0-47a7-8e9a-cedb69a5d922\") " Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.192431 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-bundle" (OuterVolumeSpecName: "bundle") pod "7db5854f-5ab0-47a7-8e9a-cedb69a5d922" (UID: "7db5854f-5ab0-47a7-8e9a-cedb69a5d922"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.204442 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-kube-api-access-44sw5" (OuterVolumeSpecName: "kube-api-access-44sw5") pod "7db5854f-5ab0-47a7-8e9a-cedb69a5d922" (UID: "7db5854f-5ab0-47a7-8e9a-cedb69a5d922"). InnerVolumeSpecName "kube-api-access-44sw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.292966 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44sw5\" (UniqueName: \"kubernetes.io/projected/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-kube-api-access-44sw5\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.293020 5028 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.728803 5028 generic.go:334] "Generic (PLEG): container finished" podID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerID="219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382" exitCode=0 Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.728903 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt4wt" event={"ID":"6ed8bf71-bcce-40fb-9606-9bd8956e67e0","Type":"ContainerDied","Data":"219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382"} Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.732561 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" event={"ID":"7db5854f-5ab0-47a7-8e9a-cedb69a5d922","Type":"ContainerDied","Data":"8cebe4fb963b2a18ef8eda223f1276cf892e121ba8521cd3d7a79294512e887d"} Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.732639 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cebe4fb963b2a18ef8eda223f1276cf892e121ba8521cd3d7a79294512e887d" Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.732640 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9" Nov 23 07:03:06 crc kubenswrapper[5028]: I1123 07:03:06.932181 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-util" (OuterVolumeSpecName: "util") pod "7db5854f-5ab0-47a7-8e9a-cedb69a5d922" (UID: "7db5854f-5ab0-47a7-8e9a-cedb69a5d922"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:07 crc kubenswrapper[5028]: I1123 07:03:07.006514 5028 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7db5854f-5ab0-47a7-8e9a-cedb69a5d922-util\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:07 crc kubenswrapper[5028]: I1123 07:03:07.194156 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hrg5d" Nov 23 07:03:08 crc kubenswrapper[5028]: I1123 07:03:08.747017 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt4wt" event={"ID":"6ed8bf71-bcce-40fb-9606-9bd8956e67e0","Type":"ContainerStarted","Data":"890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04"} Nov 23 07:03:08 crc kubenswrapper[5028]: I1123 07:03:08.773495 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pt4wt" podStartSLOduration=2.717730241 podStartE2EDuration="6.77347032s" podCreationTimestamp="2025-11-23 07:03:02 +0000 UTC" firstStartedPulling="2025-11-23 07:03:03.696033107 +0000 UTC m=+767.393437886" lastFinishedPulling="2025-11-23 07:03:07.751773126 +0000 UTC m=+771.449177965" observedRunningTime="2025-11-23 07:03:08.772778443 +0000 UTC m=+772.470183222" watchObservedRunningTime="2025-11-23 07:03:08.77347032 +0000 UTC m=+772.470875099" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.605700 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-lhgb7"] Nov 23 07:03:10 crc kubenswrapper[5028]: E1123 07:03:10.606496 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerName="util" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.606515 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerName="util" Nov 23 07:03:10 crc kubenswrapper[5028]: E1123 07:03:10.606542 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerName="pull" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.606553 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerName="pull" Nov 23 07:03:10 crc kubenswrapper[5028]: E1123 07:03:10.606588 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerName="extract" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.606606 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerName="extract" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.606758 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db5854f-5ab0-47a7-8e9a-cedb69a5d922" containerName="extract" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.607344 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.610380 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.612472 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.615972 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-hfprm" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.641114 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-lhgb7"] Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.667067 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6h4\" (UniqueName: \"kubernetes.io/projected/82ba86b5-c6a3-441d-a770-3c8ee2963240-kube-api-access-7c6h4\") pod \"nmstate-operator-557fdffb88-lhgb7\" (UID: \"82ba86b5-c6a3-441d-a770-3c8ee2963240\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.768916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c6h4\" (UniqueName: \"kubernetes.io/projected/82ba86b5-c6a3-441d-a770-3c8ee2963240-kube-api-access-7c6h4\") pod \"nmstate-operator-557fdffb88-lhgb7\" (UID: \"82ba86b5-c6a3-441d-a770-3c8ee2963240\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.789625 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c6h4\" (UniqueName: \"kubernetes.io/projected/82ba86b5-c6a3-441d-a770-3c8ee2963240-kube-api-access-7c6h4\") pod \"nmstate-operator-557fdffb88-lhgb7\" (UID: \"82ba86b5-c6a3-441d-a770-3c8ee2963240\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" Nov 23 07:03:10 crc kubenswrapper[5028]: I1123 07:03:10.956663 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" Nov 23 07:03:11 crc kubenswrapper[5028]: I1123 07:03:11.461568 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-lhgb7"] Nov 23 07:03:11 crc kubenswrapper[5028]: W1123 07:03:11.462922 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82ba86b5_c6a3_441d_a770_3c8ee2963240.slice/crio-301108bfa58860292f47c0e6f221c71eb50dbfea7686bae5f40904b2ff304d82 WatchSource:0}: Error finding container 301108bfa58860292f47c0e6f221c71eb50dbfea7686bae5f40904b2ff304d82: Status 404 returned error can't find the container with id 301108bfa58860292f47c0e6f221c71eb50dbfea7686bae5f40904b2ff304d82 Nov 23 07:03:11 crc kubenswrapper[5028]: I1123 07:03:11.768746 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" event={"ID":"82ba86b5-c6a3-441d-a770-3c8ee2963240","Type":"ContainerStarted","Data":"301108bfa58860292f47c0e6f221c71eb50dbfea7686bae5f40904b2ff304d82"} Nov 23 07:03:12 crc kubenswrapper[5028]: I1123 07:03:12.474840 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:12 crc kubenswrapper[5028]: I1123 07:03:12.475334 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:13 crc kubenswrapper[5028]: I1123 07:03:13.537671 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pt4wt" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="registry-server" probeResult="failure" output=< Nov 23 07:03:13 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 07:03:13 crc kubenswrapper[5028]: > Nov 23 07:03:15 crc kubenswrapper[5028]: I1123 07:03:15.798198 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" event={"ID":"82ba86b5-c6a3-441d-a770-3c8ee2963240","Type":"ContainerStarted","Data":"e6f44a858c51d0fab0915e2887f8260bc700f1f84c8b85456704a85573306981"} Nov 23 07:03:15 crc kubenswrapper[5028]: I1123 07:03:15.825553 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-lhgb7" podStartSLOduration=2.235565573 podStartE2EDuration="5.82552725s" podCreationTimestamp="2025-11-23 07:03:10 +0000 UTC" firstStartedPulling="2025-11-23 07:03:11.465542707 +0000 UTC m=+775.162947486" lastFinishedPulling="2025-11-23 07:03:15.055504374 +0000 UTC m=+778.752909163" observedRunningTime="2025-11-23 07:03:15.823698435 +0000 UTC m=+779.521103244" watchObservedRunningTime="2025-11-23 07:03:15.82552725 +0000 UTC m=+779.522932029" Nov 23 07:03:17 crc kubenswrapper[5028]: I1123 07:03:17.240657 5028 scope.go:117] "RemoveContainer" containerID="f1817b8922b3522de81479810449fa6dd034feb3162d991d65844aebc644c1ad" Nov 23 07:03:17 crc kubenswrapper[5028]: I1123 07:03:17.815184 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-m2sl7_e634c65f-8585-4d5d-b929-b9e1255f8921/kube-multus/2.log" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.019844 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.022510 5028 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.026926 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-n96m2" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.037624 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.046845 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.047690 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.049333 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.054191 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4r98v"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.054830 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.080916 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.122676 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vc2\" (UniqueName: \"kubernetes.io/projected/e5b8003c-1787-4f3b-9caa-d03c42d00c24-kube-api-access-g6vc2\") pod \"nmstate-webhook-6b89b748d8-g2bqd\" (UID: \"e5b8003c-1787-4f3b-9caa-d03c42d00c24\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.122742 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-nmstate-lock\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.122815 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-dbus-socket\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.122839 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-ovs-socket\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.122892 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgz9\" (UniqueName: \"kubernetes.io/projected/21e96d6a-d3d8-4132-8fb0-522d64110450-kube-api-access-wfgz9\") pod \"nmstate-metrics-5dcf9c57c5-8jph2\" (UID: \"21e96d6a-d3d8-4132-8fb0-522d64110450\") " 
pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.122920 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9vp6\" (UniqueName: \"kubernetes.io/projected/c397b21c-2367-4d08-8e3d-85e2c03afdc8-kube-api-access-p9vp6\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.122969 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e5b8003c-1787-4f3b-9caa-d03c42d00c24-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-g2bqd\" (UID: \"e5b8003c-1787-4f3b-9caa-d03c42d00c24\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.178459 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.179112 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.184434 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.184743 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.184884 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qhzms" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.204346 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224197 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgz9\" (UniqueName: \"kubernetes.io/projected/21e96d6a-d3d8-4132-8fb0-522d64110450-kube-api-access-wfgz9\") pod \"nmstate-metrics-5dcf9c57c5-8jph2\" (UID: \"21e96d6a-d3d8-4132-8fb0-522d64110450\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224256 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9vp6\" (UniqueName: \"kubernetes.io/projected/c397b21c-2367-4d08-8e3d-85e2c03afdc8-kube-api-access-p9vp6\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224290 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e5b8003c-1787-4f3b-9caa-d03c42d00c24-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-g2bqd\" (UID: \"e5b8003c-1787-4f3b-9caa-d03c42d00c24\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224309 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vc2\" (UniqueName: \"kubernetes.io/projected/e5b8003c-1787-4f3b-9caa-d03c42d00c24-kube-api-access-g6vc2\") pod \"nmstate-webhook-6b89b748d8-g2bqd\" (UID: \"e5b8003c-1787-4f3b-9caa-d03c42d00c24\") " 
pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224345 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-nmstate-lock\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224391 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8djx\" (UniqueName: \"kubernetes.io/projected/425559e3-e955-4772-9ee7-b025d565655a-kube-api-access-q8djx\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224413 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/425559e3-e955-4772-9ee7-b025d565655a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224449 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-dbus-socket\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224469 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-ovs-socket\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224487 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/425559e3-e955-4772-9ee7-b025d565655a-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.224894 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-nmstate-lock\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.225014 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-ovs-socket\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.225095 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c397b21c-2367-4d08-8e3d-85e2c03afdc8-dbus-socket\") pod \"nmstate-handler-4r98v\" (UID: 
\"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.229756 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e5b8003c-1787-4f3b-9caa-d03c42d00c24-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-g2bqd\" (UID: \"e5b8003c-1787-4f3b-9caa-d03c42d00c24\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.243005 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9vp6\" (UniqueName: \"kubernetes.io/projected/c397b21c-2367-4d08-8e3d-85e2c03afdc8-kube-api-access-p9vp6\") pod \"nmstate-handler-4r98v\" (UID: \"c397b21c-2367-4d08-8e3d-85e2c03afdc8\") " pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.243163 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vc2\" (UniqueName: \"kubernetes.io/projected/e5b8003c-1787-4f3b-9caa-d03c42d00c24-kube-api-access-g6vc2\") pod \"nmstate-webhook-6b89b748d8-g2bqd\" (UID: \"e5b8003c-1787-4f3b-9caa-d03c42d00c24\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.251989 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgz9\" (UniqueName: \"kubernetes.io/projected/21e96d6a-d3d8-4132-8fb0-522d64110450-kube-api-access-wfgz9\") pod \"nmstate-metrics-5dcf9c57c5-8jph2\" (UID: \"21e96d6a-d3d8-4132-8fb0-522d64110450\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.325706 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/425559e3-e955-4772-9ee7-b025d565655a-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.325794 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8djx\" (UniqueName: \"kubernetes.io/projected/425559e3-e955-4772-9ee7-b025d565655a-kube-api-access-q8djx\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.325816 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/425559e3-e955-4772-9ee7-b025d565655a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: E1123 07:03:20.325980 5028 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 23 07:03:20 crc kubenswrapper[5028]: E1123 07:03:20.326031 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425559e3-e955-4772-9ee7-b025d565655a-plugin-serving-cert podName:425559e3-e955-4772-9ee7-b025d565655a nodeName:}" failed. No retries permitted until 2025-11-23 07:03:20.826013811 +0000 UTC m=+784.523418590 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/425559e3-e955-4772-9ee7-b025d565655a-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-mvh2c" (UID: "425559e3-e955-4772-9ee7-b025d565655a") : secret "plugin-serving-cert" not found Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.327056 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/425559e3-e955-4772-9ee7-b025d565655a-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.342241 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.344445 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8djx\" (UniqueName: \"kubernetes.io/projected/425559e3-e955-4772-9ee7-b025d565655a-kube-api-access-q8djx\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.373235 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.388894 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.394152 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5488f47b75-txxql"] Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.394891 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.426694 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5488f47b75-txxql"] Nov 23 07:03:20 crc kubenswrapper[5028]: W1123 07:03:20.457163 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc397b21c_2367_4d08_8e3d_85e2c03afdc8.slice/crio-1422529b74f9f6a8e118074dbb31b3cbd26d10a57a0a4433c07e3a1f388fd335 WatchSource:0}: Error finding container 1422529b74f9f6a8e118074dbb31b3cbd26d10a57a0a4433c07e3a1f388fd335: Status 404 returned error can't find the container with id 1422529b74f9f6a8e118074dbb31b3cbd26d10a57a0a4433c07e3a1f388fd335 Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.527690 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-trusted-ca-bundle\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.527977 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhhn\" (UniqueName: \"kubernetes.io/projected/c08463b4-ffff-4023-b916-7e81c5688b8e-kube-api-access-qqhhn\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.528025 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-service-ca\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.528048 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-console-config\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.528213 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c08463b4-ffff-4023-b916-7e81c5688b8e-console-serving-cert\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.528247 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c08463b4-ffff-4023-b916-7e81c5688b8e-console-oauth-config\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.528285 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-oauth-serving-cert\") pod 
\"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.630151 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-trusted-ca-bundle\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.630213 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhhn\" (UniqueName: \"kubernetes.io/projected/c08463b4-ffff-4023-b916-7e81c5688b8e-kube-api-access-qqhhn\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.630270 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-service-ca\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.630302 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-console-config\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.630332 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c08463b4-ffff-4023-b916-7e81c5688b8e-console-serving-cert\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.630352 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c08463b4-ffff-4023-b916-7e81c5688b8e-console-oauth-config\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.630374 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-oauth-serving-cert\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.631497 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-oauth-serving-cert\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.631507 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-console-config\") pod 
\"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.631669 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-service-ca\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.632058 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c08463b4-ffff-4023-b916-7e81c5688b8e-trusted-ca-bundle\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.643671 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c08463b4-ffff-4023-b916-7e81c5688b8e-console-serving-cert\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.644881 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c08463b4-ffff-4023-b916-7e81c5688b8e-console-oauth-config\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.648718 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhhn\" (UniqueName: \"kubernetes.io/projected/c08463b4-ffff-4023-b916-7e81c5688b8e-kube-api-access-qqhhn\") pod \"console-5488f47b75-txxql\" (UID: \"c08463b4-ffff-4023-b916-7e81c5688b8e\") " pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.745162 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.817983 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2"] Nov 23 07:03:20 crc kubenswrapper[5028]: W1123 07:03:20.823885 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e96d6a_d3d8_4132_8fb0_522d64110450.slice/crio-8e918b556893c411a00c6dc38ae9a65ce39c79b1d87c5afb8c26dd05f6e50fa9 WatchSource:0}: Error finding container 8e918b556893c411a00c6dc38ae9a65ce39c79b1d87c5afb8c26dd05f6e50fa9: Status 404 returned error can't find the container with id 8e918b556893c411a00c6dc38ae9a65ce39c79b1d87c5afb8c26dd05f6e50fa9 Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.832808 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/425559e3-e955-4772-9ee7-b025d565655a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.834826 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" event={"ID":"21e96d6a-d3d8-4132-8fb0-522d64110450","Type":"ContainerStarted","Data":"8e918b556893c411a00c6dc38ae9a65ce39c79b1d87c5afb8c26dd05f6e50fa9"} Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.835970 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4r98v" event={"ID":"c397b21c-2367-4d08-8e3d-85e2c03afdc8","Type":"ContainerStarted","Data":"1422529b74f9f6a8e118074dbb31b3cbd26d10a57a0a4433c07e3a1f388fd335"} Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.836401 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/425559e3-e955-4772-9ee7-b025d565655a-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-mvh2c\" (UID: \"425559e3-e955-4772-9ee7-b025d565655a\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:20 crc kubenswrapper[5028]: I1123 07:03:20.911028 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd"] Nov 23 07:03:20 crc kubenswrapper[5028]: W1123 07:03:20.917514 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5b8003c_1787_4f3b_9caa_d03c42d00c24.slice/crio-0ffedb578c1af796e328dcd73a808dd47878de30fd19f5a60ca2c108358186d5 WatchSource:0}: Error finding container 0ffedb578c1af796e328dcd73a808dd47878de30fd19f5a60ca2c108358186d5: Status 404 returned error can't find the container with id 0ffedb578c1af796e328dcd73a808dd47878de30fd19f5a60ca2c108358186d5 Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.098025 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.117403 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5488f47b75-txxql"] Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.538967 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c"] Nov 23 07:03:21 crc kubenswrapper[5028]: W1123 07:03:21.547159 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod425559e3_e955_4772_9ee7_b025d565655a.slice/crio-524c9885d15acf248204759a8ba6b0cbc58afe3132495db3cddfb92dee39593a WatchSource:0}: Error finding container 524c9885d15acf248204759a8ba6b0cbc58afe3132495db3cddfb92dee39593a: Status 404 returned error can't find the container with id 524c9885d15acf248204759a8ba6b0cbc58afe3132495db3cddfb92dee39593a Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.844933 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" event={"ID":"425559e3-e955-4772-9ee7-b025d565655a","Type":"ContainerStarted","Data":"524c9885d15acf248204759a8ba6b0cbc58afe3132495db3cddfb92dee39593a"} Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.847560 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5488f47b75-txxql" event={"ID":"c08463b4-ffff-4023-b916-7e81c5688b8e","Type":"ContainerStarted","Data":"ea2faa6bdc83e0a5847458d023f6897a6fba324d6613ebfcd85dd76c5b8453ce"} Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.847614 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5488f47b75-txxql" event={"ID":"c08463b4-ffff-4023-b916-7e81c5688b8e","Type":"ContainerStarted","Data":"ab9b94746f70a523156013862dfaa380d5f17abd1ce8fd82cf3cffb63f2e4fb5"} Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.850220 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" event={"ID":"e5b8003c-1787-4f3b-9caa-d03c42d00c24","Type":"ContainerStarted","Data":"0ffedb578c1af796e328dcd73a808dd47878de30fd19f5a60ca2c108358186d5"} Nov 23 07:03:21 crc kubenswrapper[5028]: I1123 07:03:21.864208 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5488f47b75-txxql" podStartSLOduration=1.8641897269999999 podStartE2EDuration="1.864189727s" podCreationTimestamp="2025-11-23 07:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:03:21.863464569 +0000 UTC m=+785.560869348" watchObservedRunningTime="2025-11-23 07:03:21.864189727 +0000 UTC m=+785.561594506" Nov 23 07:03:22 crc kubenswrapper[5028]: I1123 07:03:22.517445 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:22 crc kubenswrapper[5028]: I1123 07:03:22.561395 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:23 crc kubenswrapper[5028]: I1123 07:03:23.862894 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" 
event={"ID":"e5b8003c-1787-4f3b-9caa-d03c42d00c24","Type":"ContainerStarted","Data":"3bae0d9bb52a904046a525f827f2e9a1c32491b1a2b7c9814730d00ade7b40be"} Nov 23 07:03:23 crc kubenswrapper[5028]: I1123 07:03:23.864062 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:23 crc kubenswrapper[5028]: I1123 07:03:23.865396 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4r98v" event={"ID":"c397b21c-2367-4d08-8e3d-85e2c03afdc8","Type":"ContainerStarted","Data":"4e87a943f78e9b001c35ca5d2fe1e9d2e5579172a5ea4203b6328742ae2dddc9"} Nov 23 07:03:23 crc kubenswrapper[5028]: I1123 07:03:23.865897 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:23 crc kubenswrapper[5028]: I1123 07:03:23.867456 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" event={"ID":"21e96d6a-d3d8-4132-8fb0-522d64110450","Type":"ContainerStarted","Data":"0643a9ef83429591ef806a675ed9198e0180e95e60171abb085c1bb7548350b8"} Nov 23 07:03:23 crc kubenswrapper[5028]: I1123 07:03:23.900174 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" podStartSLOduration=1.960120355 podStartE2EDuration="3.900135593s" podCreationTimestamp="2025-11-23 07:03:20 +0000 UTC" firstStartedPulling="2025-11-23 07:03:20.919845781 +0000 UTC m=+784.617250560" lastFinishedPulling="2025-11-23 07:03:22.859861029 +0000 UTC m=+786.557265798" observedRunningTime="2025-11-23 07:03:23.881511701 +0000 UTC m=+787.578916480" watchObservedRunningTime="2025-11-23 07:03:23.900135593 +0000 UTC m=+787.597540382" Nov 23 07:03:23 crc kubenswrapper[5028]: I1123 07:03:23.901256 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4r98v" podStartSLOduration=1.5742699519999999 podStartE2EDuration="3.9012464s" podCreationTimestamp="2025-11-23 07:03:20 +0000 UTC" firstStartedPulling="2025-11-23 07:03:20.459584712 +0000 UTC m=+784.156989491" lastFinishedPulling="2025-11-23 07:03:22.78656116 +0000 UTC m=+786.483965939" observedRunningTime="2025-11-23 07:03:23.896622018 +0000 UTC m=+787.594026797" watchObservedRunningTime="2025-11-23 07:03:23.9012464 +0000 UTC m=+787.598651189" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.096588 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt4wt"] Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.097251 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pt4wt" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="registry-server" containerID="cri-o://890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04" gracePeriod=2 Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.867857 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.901715 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" event={"ID":"425559e3-e955-4772-9ee7-b025d565655a","Type":"ContainerStarted","Data":"4440592bbbc7f2183f2becbdd6da3f8e30b81ab8f7fdeb5cbaf48ed6dfe22057"} Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.907134 5028 generic.go:334] "Generic (PLEG): container finished" podID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerID="890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04" exitCode=0 Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.907341 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt4wt" event={"ID":"6ed8bf71-bcce-40fb-9606-9bd8956e67e0","Type":"ContainerDied","Data":"890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04"} Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.907402 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt4wt" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.907449 5028 scope.go:117] "RemoveContainer" containerID="890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.907425 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt4wt" event={"ID":"6ed8bf71-bcce-40fb-9606-9bd8956e67e0","Type":"ContainerDied","Data":"a64fe3ee72dfdbfaf93405652c1c9bf2d5aa15243efbb1abf211104726e27e4a"} Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.913182 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-catalog-content\") pod \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.913218 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxft\" (UniqueName: \"kubernetes.io/projected/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-kube-api-access-mwxft\") pod \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.913297 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-utilities\") pod \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\" (UID: \"6ed8bf71-bcce-40fb-9606-9bd8956e67e0\") " Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.914624 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-utilities" (OuterVolumeSpecName: "utilities") pod "6ed8bf71-bcce-40fb-9606-9bd8956e67e0" (UID: "6ed8bf71-bcce-40fb-9606-9bd8956e67e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.932583 5028 scope.go:117] "RemoveContainer" containerID="219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.933967 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-kube-api-access-mwxft" (OuterVolumeSpecName: "kube-api-access-mwxft") pod "6ed8bf71-bcce-40fb-9606-9bd8956e67e0" (UID: "6ed8bf71-bcce-40fb-9606-9bd8956e67e0"). InnerVolumeSpecName "kube-api-access-mwxft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.982476 5028 scope.go:117] "RemoveContainer" containerID="c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.997269 5028 scope.go:117] "RemoveContainer" containerID="890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04" Nov 23 07:03:25 crc kubenswrapper[5028]: E1123 07:03:25.997678 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04\": container with ID starting with 890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04 not found: ID does not exist" containerID="890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.997723 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04"} err="failed to get container status \"890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04\": rpc error: code = NotFound desc = could not find container \"890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04\": container with ID starting with 890841a0c2a1c2abf66c8e1abf01178d4d5db703ddec1843b8ae714974a7fd04 not found: ID does not exist" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.997750 5028 scope.go:117] "RemoveContainer" containerID="219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382" Nov 23 07:03:25 crc kubenswrapper[5028]: E1123 07:03:25.998098 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382\": container with ID starting with 219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382 not found: ID does not exist" containerID="219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.998171 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382"} err="failed to get container status \"219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382\": rpc error: code = NotFound desc = could not find container \"219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382\": container with ID starting with 219b309cf4a9480b338ac93f1dcd2e5b8084d1e87542493f914ca0e7549c3382 not found: ID does not exist" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.998254 5028 scope.go:117] "RemoveContainer" containerID="c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0" Nov 23 07:03:25 crc 
kubenswrapper[5028]: E1123 07:03:25.998714 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0\": container with ID starting with c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0 not found: ID does not exist" containerID="c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0" Nov 23 07:03:25 crc kubenswrapper[5028]: I1123 07:03:25.998747 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0"} err="failed to get container status \"c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0\": rpc error: code = NotFound desc = could not find container \"c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0\": container with ID starting with c4506c1b062f807c551afa80ba0d1ad7c691cca1ba353fa248207293900479b0 not found: ID does not exist" Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.015168 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxft\" (UniqueName: \"kubernetes.io/projected/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-kube-api-access-mwxft\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.015209 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.032782 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ed8bf71-bcce-40fb-9606-9bd8956e67e0" (UID: "6ed8bf71-bcce-40fb-9606-9bd8956e67e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.116751 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ed8bf71-bcce-40fb-9606-9bd8956e67e0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.244110 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-mvh2c" podStartSLOduration=3.101756847 podStartE2EDuration="6.244061932s" podCreationTimestamp="2025-11-23 07:03:20 +0000 UTC" firstStartedPulling="2025-11-23 07:03:21.550051303 +0000 UTC m=+785.247456082" lastFinishedPulling="2025-11-23 07:03:24.692356388 +0000 UTC m=+788.389761167" observedRunningTime="2025-11-23 07:03:25.933913126 +0000 UTC m=+789.631317905" watchObservedRunningTime="2025-11-23 07:03:26.244061932 +0000 UTC m=+789.941466711" Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.249266 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt4wt"] Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.253022 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pt4wt"] Nov 23 07:03:26 crc kubenswrapper[5028]: I1123 07:03:26.920171 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" event={"ID":"21e96d6a-d3d8-4132-8fb0-522d64110450","Type":"ContainerStarted","Data":"382726f14632a0ec5e09f99f5482f003e0e342276ea2537b8386a3f7eab33ef4"} Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.068447 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" path="/var/lib/kubelet/pods/6ed8bf71-bcce-40fb-9606-9bd8956e67e0/volumes" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.714579 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-8jph2" podStartSLOduration=2.860079205 podStartE2EDuration="7.714522706s" podCreationTimestamp="2025-11-23 07:03:20 +0000 UTC" firstStartedPulling="2025-11-23 07:03:20.82625429 +0000 UTC m=+784.523659069" lastFinishedPulling="2025-11-23 07:03:25.680697791 +0000 UTC m=+789.378102570" observedRunningTime="2025-11-23 07:03:26.946235782 +0000 UTC m=+790.643640561" watchObservedRunningTime="2025-11-23 07:03:27.714522706 +0000 UTC m=+791.411927525" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.722477 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ctmnx"] Nov 23 07:03:27 crc kubenswrapper[5028]: E1123 07:03:27.722916 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="extract-content" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.722972 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="extract-content" Nov 23 07:03:27 crc kubenswrapper[5028]: E1123 07:03:27.723000 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="extract-utilities" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.723016 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="extract-utilities" Nov 23 07:03:27 crc kubenswrapper[5028]: E1123 07:03:27.723043 5028 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="registry-server" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.723055 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="registry-server" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.723301 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed8bf71-bcce-40fb-9606-9bd8956e67e0" containerName="registry-server" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.724764 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.812635 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctmnx"] Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.847005 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-catalog-content\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.847190 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-utilities\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.847266 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2cn\" (UniqueName: \"kubernetes.io/projected/f61600fc-fdc7-4879-b481-c76084539ee4-kube-api-access-qs2cn\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.948863 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-catalog-content\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.949053 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-utilities\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.949140 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2cn\" (UniqueName: \"kubernetes.io/projected/f61600fc-fdc7-4879-b481-c76084539ee4-kube-api-access-qs2cn\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.949427 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-catalog-content\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.949606 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-utilities\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:27 crc kubenswrapper[5028]: I1123 07:03:27.976044 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2cn\" (UniqueName: \"kubernetes.io/projected/f61600fc-fdc7-4879-b481-c76084539ee4-kube-api-access-qs2cn\") pod \"community-operators-ctmnx\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:28 crc kubenswrapper[5028]: I1123 07:03:28.083029 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:28 crc kubenswrapper[5028]: I1123 07:03:28.396941 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctmnx"] Nov 23 07:03:28 crc kubenswrapper[5028]: I1123 07:03:28.932647 5028 generic.go:334] "Generic (PLEG): container finished" podID="f61600fc-fdc7-4879-b481-c76084539ee4" containerID="1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335" exitCode=0 Nov 23 07:03:28 crc kubenswrapper[5028]: I1123 07:03:28.932693 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctmnx" event={"ID":"f61600fc-fdc7-4879-b481-c76084539ee4","Type":"ContainerDied","Data":"1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335"} Nov 23 07:03:28 crc kubenswrapper[5028]: I1123 07:03:28.932720 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctmnx" event={"ID":"f61600fc-fdc7-4879-b481-c76084539ee4","Type":"ContainerStarted","Data":"3df9c7f9f9860ce179d366056e19cf94f35ed403fe91dd7680be5b85336fe419"} Nov 23 07:03:29 crc kubenswrapper[5028]: I1123 07:03:29.942218 5028 generic.go:334] "Generic (PLEG): container finished" podID="f61600fc-fdc7-4879-b481-c76084539ee4" containerID="ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621" exitCode=0 Nov 23 07:03:29 crc kubenswrapper[5028]: I1123 07:03:29.942265 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctmnx" event={"ID":"f61600fc-fdc7-4879-b481-c76084539ee4","Type":"ContainerDied","Data":"ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621"} Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.426456 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4r98v" Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.745424 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.745834 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.751594 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.945940 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.946000 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.950244 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctmnx" event={"ID":"f61600fc-fdc7-4879-b481-c76084539ee4","Type":"ContainerStarted","Data":"3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632"} Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.953670 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5488f47b75-txxql" Nov 23 07:03:30 crc kubenswrapper[5028]: I1123 07:03:30.970776 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ctmnx" podStartSLOduration=2.428312294 podStartE2EDuration="3.970758344s" podCreationTimestamp="2025-11-23 07:03:27 +0000 UTC" firstStartedPulling="2025-11-23 07:03:28.934459309 +0000 UTC m=+792.631864088" lastFinishedPulling="2025-11-23 07:03:30.476905329 +0000 UTC m=+794.174310138" observedRunningTime="2025-11-23 07:03:30.969216526 +0000 UTC m=+794.666621305" watchObservedRunningTime="2025-11-23 07:03:30.970758344 +0000 UTC m=+794.668163123" Nov 23 07:03:31 crc kubenswrapper[5028]: I1123 07:03:31.023371 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pppdd"] Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.711013 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hnv5j"] Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.713636 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.774734 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-catalog-content\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.774814 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-utilities\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.774927 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmzr\" (UniqueName: \"kubernetes.io/projected/1be43991-5a16-4bbf-9ee6-f0a007f45e31-kube-api-access-zbmzr\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.876997 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-catalog-content\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.877058 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-utilities\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.877091 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbmzr\" (UniqueName: \"kubernetes.io/projected/1be43991-5a16-4bbf-9ee6-f0a007f45e31-kube-api-access-zbmzr\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.877799 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-catalog-content\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.878141 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-utilities\") pod \"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.927818 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbmzr\" (UniqueName: \"kubernetes.io/projected/1be43991-5a16-4bbf-9ee6-f0a007f45e31-kube-api-access-zbmzr\") pod 
\"certified-operators-hnv5j\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:36 crc kubenswrapper[5028]: I1123 07:03:36.931226 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hnv5j"] Nov 23 07:03:37 crc kubenswrapper[5028]: I1123 07:03:37.029878 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:37 crc kubenswrapper[5028]: I1123 07:03:37.538082 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hnv5j"] Nov 23 07:03:37 crc kubenswrapper[5028]: W1123 07:03:37.556300 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1be43991_5a16_4bbf_9ee6_f0a007f45e31.slice/crio-682e0ae25c2cedd2436b9085b9021ce34055b7828bd78b9a6ecea38119e1d5b9 WatchSource:0}: Error finding container 682e0ae25c2cedd2436b9085b9021ce34055b7828bd78b9a6ecea38119e1d5b9: Status 404 returned error can't find the container with id 682e0ae25c2cedd2436b9085b9021ce34055b7828bd78b9a6ecea38119e1d5b9 Nov 23 07:03:37 crc kubenswrapper[5028]: I1123 07:03:37.995781 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnv5j" event={"ID":"1be43991-5a16-4bbf-9ee6-f0a007f45e31","Type":"ContainerStarted","Data":"682e0ae25c2cedd2436b9085b9021ce34055b7828bd78b9a6ecea38119e1d5b9"} Nov 23 07:03:38 crc kubenswrapper[5028]: I1123 07:03:38.083672 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:38 crc kubenswrapper[5028]: I1123 07:03:38.083757 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:38 crc kubenswrapper[5028]: I1123 07:03:38.138106 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:39 crc kubenswrapper[5028]: I1123 07:03:39.005762 5028 generic.go:334] "Generic (PLEG): container finished" podID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerID="375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a" exitCode=0 Nov 23 07:03:39 crc kubenswrapper[5028]: I1123 07:03:39.008439 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnv5j" event={"ID":"1be43991-5a16-4bbf-9ee6-f0a007f45e31","Type":"ContainerDied","Data":"375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a"} Nov 23 07:03:39 crc kubenswrapper[5028]: I1123 07:03:39.063123 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:40 crc kubenswrapper[5028]: I1123 07:03:40.015893 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnv5j" event={"ID":"1be43991-5a16-4bbf-9ee6-f0a007f45e31","Type":"ContainerStarted","Data":"7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29"} Nov 23 07:03:40 crc kubenswrapper[5028]: I1123 07:03:40.380643 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-g2bqd" Nov 23 07:03:41 crc kubenswrapper[5028]: I1123 07:03:41.024431 5028 generic.go:334] "Generic (PLEG): container finished" 
podID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerID="7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29" exitCode=0 Nov 23 07:03:41 crc kubenswrapper[5028]: I1123 07:03:41.024478 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnv5j" event={"ID":"1be43991-5a16-4bbf-9ee6-f0a007f45e31","Type":"ContainerDied","Data":"7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29"} Nov 23 07:03:41 crc kubenswrapper[5028]: I1123 07:03:41.695513 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctmnx"] Nov 23 07:03:41 crc kubenswrapper[5028]: I1123 07:03:41.695768 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ctmnx" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="registry-server" containerID="cri-o://3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632" gracePeriod=2 Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.733066 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.767609 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2cn\" (UniqueName: \"kubernetes.io/projected/f61600fc-fdc7-4879-b481-c76084539ee4-kube-api-access-qs2cn\") pod \"f61600fc-fdc7-4879-b481-c76084539ee4\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.767699 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-catalog-content\") pod \"f61600fc-fdc7-4879-b481-c76084539ee4\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.767821 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-utilities\") pod \"f61600fc-fdc7-4879-b481-c76084539ee4\" (UID: \"f61600fc-fdc7-4879-b481-c76084539ee4\") " Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.769854 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-utilities" (OuterVolumeSpecName: "utilities") pod "f61600fc-fdc7-4879-b481-c76084539ee4" (UID: "f61600fc-fdc7-4879-b481-c76084539ee4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.798738 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61600fc-fdc7-4879-b481-c76084539ee4-kube-api-access-qs2cn" (OuterVolumeSpecName: "kube-api-access-qs2cn") pod "f61600fc-fdc7-4879-b481-c76084539ee4" (UID: "f61600fc-fdc7-4879-b481-c76084539ee4"). InnerVolumeSpecName "kube-api-access-qs2cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.845498 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f61600fc-fdc7-4879-b481-c76084539ee4" (UID: "f61600fc-fdc7-4879-b481-c76084539ee4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.873466 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2cn\" (UniqueName: \"kubernetes.io/projected/f61600fc-fdc7-4879-b481-c76084539ee4-kube-api-access-qs2cn\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.873512 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:42 crc kubenswrapper[5028]: I1123 07:03:42.873528 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f61600fc-fdc7-4879-b481-c76084539ee4-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.043072 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnv5j" event={"ID":"1be43991-5a16-4bbf-9ee6-f0a007f45e31","Type":"ContainerStarted","Data":"79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca"} Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.046744 5028 generic.go:334] "Generic (PLEG): container finished" podID="f61600fc-fdc7-4879-b481-c76084539ee4" containerID="3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632" exitCode=0 Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.046819 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctmnx" event={"ID":"f61600fc-fdc7-4879-b481-c76084539ee4","Type":"ContainerDied","Data":"3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632"} Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.046856 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctmnx" event={"ID":"f61600fc-fdc7-4879-b481-c76084539ee4","Type":"ContainerDied","Data":"3df9c7f9f9860ce179d366056e19cf94f35ed403fe91dd7680be5b85336fe419"} Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.046882 5028 scope.go:117] "RemoveContainer" containerID="3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.046827 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ctmnx" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.065307 5028 scope.go:117] "RemoveContainer" containerID="ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.071201 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hnv5j" podStartSLOduration=3.910451279 podStartE2EDuration="7.071176348s" podCreationTimestamp="2025-11-23 07:03:36 +0000 UTC" firstStartedPulling="2025-11-23 07:03:39.009159359 +0000 UTC m=+802.706564138" lastFinishedPulling="2025-11-23 07:03:42.169884428 +0000 UTC m=+805.867289207" observedRunningTime="2025-11-23 07:03:43.070337788 +0000 UTC m=+806.767742577" watchObservedRunningTime="2025-11-23 07:03:43.071176348 +0000 UTC m=+806.768581147" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.111177 5028 scope.go:117] "RemoveContainer" containerID="1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.157003 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctmnx"] Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.165160 5028 scope.go:117] "RemoveContainer" containerID="3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632" Nov 23 07:03:43 crc kubenswrapper[5028]: E1123 07:03:43.170707 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632\": container with ID starting with 3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632 not found: ID does not exist" containerID="3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.170746 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632"} err="failed to get container status \"3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632\": rpc error: code = NotFound desc = could not find container \"3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632\": container with ID starting with 3a240c7b762a58a1b01708ad13cb519093d8695bff7a1837919b24a720ed9632 not found: ID does not exist" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.170773 5028 scope.go:117] "RemoveContainer" containerID="ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621" Nov 23 07:03:43 crc kubenswrapper[5028]: E1123 07:03:43.171037 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621\": container with ID starting with ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621 not found: ID does not exist" containerID="ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.171059 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621"} err="failed to get container status \"ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621\": rpc error: code = NotFound desc = could not find container 
\"ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621\": container with ID starting with ed4ff70c5384afffdd465da9fa8cf3b2564f6ce25361b30df21b4e81d1e32621 not found: ID does not exist" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.171072 5028 scope.go:117] "RemoveContainer" containerID="1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335" Nov 23 07:03:43 crc kubenswrapper[5028]: E1123 07:03:43.171280 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335\": container with ID starting with 1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335 not found: ID does not exist" containerID="1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.171301 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335"} err="failed to get container status \"1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335\": rpc error: code = NotFound desc = could not find container \"1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335\": container with ID starting with 1517ab3373883aa39f96bafede03e564055c9373b065d7ed41d59bf4ef09e335 not found: ID does not exist" Nov 23 07:03:43 crc kubenswrapper[5028]: I1123 07:03:43.183960 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ctmnx"] Nov 23 07:03:45 crc kubenswrapper[5028]: I1123 07:03:45.061238 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" path="/var/lib/kubelet/pods/f61600fc-fdc7-4879-b481-c76084539ee4/volumes" Nov 23 07:03:47 crc kubenswrapper[5028]: I1123 07:03:47.030141 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:47 crc kubenswrapper[5028]: I1123 07:03:47.030591 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:47 crc kubenswrapper[5028]: I1123 07:03:47.078122 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:47 crc kubenswrapper[5028]: I1123 07:03:47.148331 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.301988 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hnv5j"] Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.303243 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hnv5j" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="registry-server" containerID="cri-o://79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca" gracePeriod=2 Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.778073 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.919792 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-catalog-content\") pod \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.920412 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-utilities\") pod \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.920455 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbmzr\" (UniqueName: \"kubernetes.io/projected/1be43991-5a16-4bbf-9ee6-f0a007f45e31-kube-api-access-zbmzr\") pod \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\" (UID: \"1be43991-5a16-4bbf-9ee6-f0a007f45e31\") " Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.924397 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-utilities" (OuterVolumeSpecName: "utilities") pod "1be43991-5a16-4bbf-9ee6-f0a007f45e31" (UID: "1be43991-5a16-4bbf-9ee6-f0a007f45e31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:50 crc kubenswrapper[5028]: I1123 07:03:50.928248 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be43991-5a16-4bbf-9ee6-f0a007f45e31-kube-api-access-zbmzr" (OuterVolumeSpecName: "kube-api-access-zbmzr") pod "1be43991-5a16-4bbf-9ee6-f0a007f45e31" (UID: "1be43991-5a16-4bbf-9ee6-f0a007f45e31"). InnerVolumeSpecName "kube-api-access-zbmzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.023200 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.023257 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbmzr\" (UniqueName: \"kubernetes.io/projected/1be43991-5a16-4bbf-9ee6-f0a007f45e31-kube-api-access-zbmzr\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.104399 5028 generic.go:334] "Generic (PLEG): container finished" podID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerID="79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca" exitCode=0 Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.104475 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hnv5j" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.104528 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnv5j" event={"ID":"1be43991-5a16-4bbf-9ee6-f0a007f45e31","Type":"ContainerDied","Data":"79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca"} Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.104631 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hnv5j" event={"ID":"1be43991-5a16-4bbf-9ee6-f0a007f45e31","Type":"ContainerDied","Data":"682e0ae25c2cedd2436b9085b9021ce34055b7828bd78b9a6ecea38119e1d5b9"} Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.104669 5028 scope.go:117] "RemoveContainer" containerID="79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.132032 5028 scope.go:117] "RemoveContainer" containerID="7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.150904 5028 scope.go:117] "RemoveContainer" containerID="375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.181093 5028 scope.go:117] "RemoveContainer" containerID="79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca" Nov 23 07:03:51 crc kubenswrapper[5028]: E1123 07:03:51.181740 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca\": container with ID starting with 79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca not found: ID does not exist" containerID="79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.181934 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca"} err="failed to get container status \"79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca\": rpc error: code = NotFound desc = could not find container \"79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca\": container with ID starting with 79283ff54b92a262cb00ddc2948d0f0dd99a7e0251dda300d1002390248ddfca not found: ID does not exist" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.182035 5028 scope.go:117] "RemoveContainer" containerID="7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29" Nov 23 07:03:51 crc kubenswrapper[5028]: E1123 07:03:51.182663 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29\": container with ID starting with 7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29 not found: ID does not exist" containerID="7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.182711 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29"} err="failed to get container status \"7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29\": rpc error: code = NotFound desc = could not find container 
\"7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29\": container with ID starting with 7b4b1cabf6ff1645e1be288fd96a49e6a1bac03ba41701c71efd26cc662adf29 not found: ID does not exist" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.182746 5028 scope.go:117] "RemoveContainer" containerID="375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a" Nov 23 07:03:51 crc kubenswrapper[5028]: E1123 07:03:51.183175 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a\": container with ID starting with 375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a not found: ID does not exist" containerID="375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.183210 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a"} err="failed to get container status \"375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a\": rpc error: code = NotFound desc = could not find container \"375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a\": container with ID starting with 375f3486875965b04f884f8e0767e3d0135aec39abf114fcfcf6493e1e5f362a not found: ID does not exist" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.820892 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1be43991-5a16-4bbf-9ee6-f0a007f45e31" (UID: "1be43991-5a16-4bbf-9ee6-f0a007f45e31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:03:51 crc kubenswrapper[5028]: I1123 07:03:51.837235 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1be43991-5a16-4bbf-9ee6-f0a007f45e31-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:03:52 crc kubenswrapper[5028]: I1123 07:03:52.076456 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hnv5j"] Nov 23 07:03:52 crc kubenswrapper[5028]: I1123 07:03:52.081237 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hnv5j"] Nov 23 07:03:53 crc kubenswrapper[5028]: I1123 07:03:53.065359 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" path="/var/lib/kubelet/pods/1be43991-5a16-4bbf-9ee6-f0a007f45e31/volumes" Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.973572 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"] Nov 23 07:03:54 crc kubenswrapper[5028]: E1123 07:03:54.974029 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="registry-server" Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.974054 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="registry-server" Nov 23 07:03:54 crc kubenswrapper[5028]: E1123 07:03:54.974073 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="extract-utilities" Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.974089 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="extract-utilities" Nov 23 07:03:54 crc kubenswrapper[5028]: E1123 07:03:54.974110 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="extract-content" Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.974126 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="extract-content" Nov 23 07:03:54 crc kubenswrapper[5028]: E1123 07:03:54.974146 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="extract-utilities" Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.974158 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="extract-utilities" Nov 23 07:03:54 crc kubenswrapper[5028]: E1123 07:03:54.974187 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="registry-server" Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.974204 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="registry-server" Nov 23 07:03:54 crc kubenswrapper[5028]: E1123 07:03:54.974228 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="extract-content" Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.974242 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="extract-content" Nov 23 07:03:54 crc 
kubenswrapper[5028]: I1123 07:03:54.974425 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61600fc-fdc7-4879-b481-c76084539ee4" containerName="registry-server"
Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.974461 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be43991-5a16-4bbf-9ee6-f0a007f45e31" containerName="registry-server"
Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.976115 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.979264 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 23 07:03:54 crc kubenswrapper[5028]: I1123 07:03:54.987422 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"]
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.089878 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.090000 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl6xr\" (UniqueName: \"kubernetes.io/projected/abcac30c-4771-4bbd-a67c-780b61670e1c-kube-api-access-vl6xr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.090057 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.192377 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.192795 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl6xr\" (UniqueName: \"kubernetes.io/projected/abcac30c-4771-4bbd-a67c-780b61670e1c-kube-api-access-vl6xr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.193146 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.193236 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.193471 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.223864 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl6xr\" (UniqueName: \"kubernetes.io/projected/abcac30c-4771-4bbd-a67c-780b61670e1c-kube-api-access-vl6xr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.317175 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:03:55 crc kubenswrapper[5028]: I1123 07:03:55.570522 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"]
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.068826 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-pppdd" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerName="console" containerID="cri-o://6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d" gracePeriod=15
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.145800 5028 generic.go:334] "Generic (PLEG): container finished" podID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerID="b4f83badb325ce7f612f0cb77e0dcdced129664d6ade067acc0d1840f25c4c46" exitCode=0
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.145916 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb" event={"ID":"abcac30c-4771-4bbd-a67c-780b61670e1c","Type":"ContainerDied","Data":"b4f83badb325ce7f612f0cb77e0dcdced129664d6ade067acc0d1840f25c4c46"}
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.146036 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb" event={"ID":"abcac30c-4771-4bbd-a67c-780b61670e1c","Type":"ContainerStarted","Data":"986900d10aeeff60960b6a8292a67b1aa1cad61aa91305a9f5dd504098c0d098"}
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.444782 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pppdd_872d66c4-4f5a-4067-8aa5-5cb7b56b9f94/console/0.log"
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.445431 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.515144 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-oauth-serving-cert\") pod \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") "
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.515208 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-service-ca\") pod \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") "
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.515230 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-oauth-config\") pod \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") "
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.515251 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-serving-cert\") pod \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") "
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.515273 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcj7n\" (UniqueName: \"kubernetes.io/projected/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-kube-api-access-kcj7n\") pod \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") "
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.515305 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-config\") pod \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") "
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.515323 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-trusted-ca-bundle\") pod \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\" (UID: \"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94\") "
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.516285 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-service-ca" (OuterVolumeSpecName: "service-ca") pod "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" (UID: "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.516311 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" (UID: "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.516333 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" (UID: "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.516387 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-config" (OuterVolumeSpecName: "console-config") pod "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" (UID: "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.522523 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" (UID: "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.523016 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" (UID: "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.525404 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-kube-api-access-kcj7n" (OuterVolumeSpecName: "kube-api-access-kcj7n") pod "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" (UID: "872d66c4-4f5a-4067-8aa5-5cb7b56b9f94"). InnerVolumeSpecName "kube-api-access-kcj7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.616381 5028 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.616464 5028 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-service-ca\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.616476 5028 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.616490 5028 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.616503 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcj7n\" (UniqueName: \"kubernetes.io/projected/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-kube-api-access-kcj7n\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.616517 5028 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-console-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:56 crc kubenswrapper[5028]: I1123 07:03:56.616528 5028 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.153410 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-pppdd_872d66c4-4f5a-4067-8aa5-5cb7b56b9f94/console/0.log"
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.153470 5028 generic.go:334] "Generic (PLEG): container finished" podID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerID="6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d" exitCode=2
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.153503 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pppdd" event={"ID":"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94","Type":"ContainerDied","Data":"6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d"}
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.153544 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-pppdd" event={"ID":"872d66c4-4f5a-4067-8aa5-5cb7b56b9f94","Type":"ContainerDied","Data":"5e5615e89f47ca1ca54c0efe767cd890ed934e890372093e783902e978329196"}
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.153564 5028 scope.go:117] "RemoveContainer" containerID="6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d"
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.153704 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-pppdd"
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.174247 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-pppdd"]
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.175430 5028 scope.go:117] "RemoveContainer" containerID="6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d"
Nov 23 07:03:57 crc kubenswrapper[5028]: E1123 07:03:57.175804 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d\": container with ID starting with 6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d not found: ID does not exist" containerID="6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d"
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.175863 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d"} err="failed to get container status \"6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d\": rpc error: code = NotFound desc = could not find container \"6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d\": container with ID starting with 6e8e927666e959366baafc32ddf678497723dc48ff3130317558eed014caa48d not found: ID does not exist"
Nov 23 07:03:57 crc kubenswrapper[5028]: I1123 07:03:57.179779 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-pppdd"]
Nov 23 07:03:58 crc kubenswrapper[5028]: I1123 07:03:58.161932 5028 generic.go:334] "Generic (PLEG): container finished" podID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerID="bfdccb6093cfd553d014f79b0588802b808804b8ebd6b13b04bec165ba37b585" exitCode=0
Nov 23 07:03:58 crc kubenswrapper[5028]: I1123 07:03:58.161998 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb" event={"ID":"abcac30c-4771-4bbd-a67c-780b61670e1c","Type":"ContainerDied","Data":"bfdccb6093cfd553d014f79b0588802b808804b8ebd6b13b04bec165ba37b585"}
Nov 23 07:03:59 crc kubenswrapper[5028]: I1123 07:03:59.063626 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" path="/var/lib/kubelet/pods/872d66c4-4f5a-4067-8aa5-5cb7b56b9f94/volumes"
Nov 23 07:03:59 crc kubenswrapper[5028]: I1123 07:03:59.172743 5028 generic.go:334] "Generic (PLEG): container finished" podID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerID="24320ef5081b4719968e57660ec3069076639afdd6762ff647631cfbdb6cd4ec" exitCode=0
Nov 23 07:03:59 crc kubenswrapper[5028]: I1123 07:03:59.172824 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb" event={"ID":"abcac30c-4771-4bbd-a67c-780b61670e1c","Type":"ContainerDied","Data":"24320ef5081b4719968e57660ec3069076639afdd6762ff647631cfbdb6cd4ec"}
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.495428 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.679027 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-bundle\") pod \"abcac30c-4771-4bbd-a67c-780b61670e1c\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") "
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.679739 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl6xr\" (UniqueName: \"kubernetes.io/projected/abcac30c-4771-4bbd-a67c-780b61670e1c-kube-api-access-vl6xr\") pod \"abcac30c-4771-4bbd-a67c-780b61670e1c\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") "
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.679802 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-util\") pod \"abcac30c-4771-4bbd-a67c-780b61670e1c\" (UID: \"abcac30c-4771-4bbd-a67c-780b61670e1c\") "
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.681773 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-bundle" (OuterVolumeSpecName: "bundle") pod "abcac30c-4771-4bbd-a67c-780b61670e1c" (UID: "abcac30c-4771-4bbd-a67c-780b61670e1c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.688465 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcac30c-4771-4bbd-a67c-780b61670e1c-kube-api-access-vl6xr" (OuterVolumeSpecName: "kube-api-access-vl6xr") pod "abcac30c-4771-4bbd-a67c-780b61670e1c" (UID: "abcac30c-4771-4bbd-a67c-780b61670e1c"). InnerVolumeSpecName "kube-api-access-vl6xr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.693416 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-util" (OuterVolumeSpecName: "util") pod "abcac30c-4771-4bbd-a67c-780b61670e1c" (UID: "abcac30c-4771-4bbd-a67c-780b61670e1c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.781004 5028 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.781050 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl6xr\" (UniqueName: \"kubernetes.io/projected/abcac30c-4771-4bbd-a67c-780b61670e1c-kube-api-access-vl6xr\") on node \"crc\" DevicePath \"\""
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.781066 5028 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/abcac30c-4771-4bbd-a67c-780b61670e1c-util\") on node \"crc\" DevicePath \"\""
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.946981 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:04:00 crc kubenswrapper[5028]: I1123 07:04:00.947090 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:04:01 crc kubenswrapper[5028]: I1123 07:04:01.191637 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb" event={"ID":"abcac30c-4771-4bbd-a67c-780b61670e1c","Type":"ContainerDied","Data":"986900d10aeeff60960b6a8292a67b1aa1cad61aa91305a9f5dd504098c0d098"}
Nov 23 07:04:01 crc kubenswrapper[5028]: I1123 07:04:01.191717 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="986900d10aeeff60960b6a8292a67b1aa1cad61aa91305a9f5dd504098c0d098"
Nov 23 07:04:01 crc kubenswrapper[5028]: I1123 07:04:01.191690 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.113473 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mrkdv"]
Nov 23 07:04:12 crc kubenswrapper[5028]: E1123 07:04:12.114358 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerName="console"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.114376 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerName="console"
Nov 23 07:04:12 crc kubenswrapper[5028]: E1123 07:04:12.114393 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerName="extract"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.114402 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerName="extract"
Nov 23 07:04:12 crc kubenswrapper[5028]: E1123 07:04:12.114416 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerName="util"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.114422 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerName="util"
Nov 23 07:04:12 crc kubenswrapper[5028]: E1123 07:04:12.114435 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerName="pull"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.114440 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerName="pull"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.114543 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcac30c-4771-4bbd-a67c-780b61670e1c" containerName="extract"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.114558 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="872d66c4-4f5a-4067-8aa5-5cb7b56b9f94" containerName="console"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.115426 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.127219 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrkdv"]
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.242479 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdxk\" (UniqueName: \"kubernetes.io/projected/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-kube-api-access-hgdxk\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.242724 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-utilities\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.242829 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-catalog-content\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.345091 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgdxk\" (UniqueName: \"kubernetes.io/projected/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-kube-api-access-hgdxk\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.345174 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-utilities\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.345198 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-catalog-content\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.345794 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-catalog-content\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.345998 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-utilities\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.373038 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgdxk\" (UniqueName: \"kubernetes.io/projected/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-kube-api-access-hgdxk\") pod \"redhat-marketplace-mrkdv\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") " pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.431830 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:12 crc kubenswrapper[5028]: I1123 07:04:12.907300 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrkdv"]
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.258427 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrkdv" event={"ID":"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6","Type":"ContainerStarted","Data":"c88c795378ded144ec603bbf9b65cb9684770b35c0003798a0581ef23b43dd1f"}
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.310673 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"]
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.311932 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.314988 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.315526 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-kvn5l"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.315702 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.315967 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.328064 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.332423 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"]
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.458540 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8730ad9d-67c7-4740-acd9-d3e5585890bb-webhook-cert\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.458596 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8730ad9d-67c7-4740-acd9-d3e5585890bb-apiservice-cert\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.458637 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjmz\" (UniqueName: \"kubernetes.io/projected/8730ad9d-67c7-4740-acd9-d3e5585890bb-kube-api-access-fzjmz\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.560414 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzjmz\" (UniqueName: \"kubernetes.io/projected/8730ad9d-67c7-4740-acd9-d3e5585890bb-kube-api-access-fzjmz\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.560564 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8730ad9d-67c7-4740-acd9-d3e5585890bb-webhook-cert\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.560614 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8730ad9d-67c7-4740-acd9-d3e5585890bb-apiservice-cert\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.567942 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8730ad9d-67c7-4740-acd9-d3e5585890bb-webhook-cert\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.576156 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8730ad9d-67c7-4740-acd9-d3e5585890bb-apiservice-cert\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.585257 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzjmz\" (UniqueName: \"kubernetes.io/projected/8730ad9d-67c7-4740-acd9-d3e5585890bb-kube-api-access-fzjmz\") pod \"metallb-operator-controller-manager-7c4fd7cf5d-jsllf\" (UID: \"8730ad9d-67c7-4740-acd9-d3e5585890bb\") " pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.632144 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.777899 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"]
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.791089 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.798248 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-zp99l"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.798426 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.798678 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.809165 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"]
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.964556 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4br4\" (UniqueName: \"kubernetes.io/projected/1ec8929e-da82-4b5e-9017-052092ec1e9a-kube-api-access-t4br4\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.964601 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ec8929e-da82-4b5e-9017-052092ec1e9a-webhook-cert\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:13 crc kubenswrapper[5028]: I1123 07:04:13.964629 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ec8929e-da82-4b5e-9017-052092ec1e9a-apiservice-cert\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.026775 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"]
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.067197 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4br4\" (UniqueName: \"kubernetes.io/projected/1ec8929e-da82-4b5e-9017-052092ec1e9a-kube-api-access-t4br4\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.067599 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ec8929e-da82-4b5e-9017-052092ec1e9a-webhook-cert\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.067686 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ec8929e-da82-4b5e-9017-052092ec1e9a-apiservice-cert\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.073615 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1ec8929e-da82-4b5e-9017-052092ec1e9a-apiservice-cert\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.087656 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1ec8929e-da82-4b5e-9017-052092ec1e9a-webhook-cert\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.095622 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4br4\" (UniqueName: \"kubernetes.io/projected/1ec8929e-da82-4b5e-9017-052092ec1e9a-kube-api-access-t4br4\") pod \"metallb-operator-webhook-server-6d96c8b774-bv8l7\" (UID: \"1ec8929e-da82-4b5e-9017-052092ec1e9a\") " pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.124805 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.277379 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf" event={"ID":"8730ad9d-67c7-4740-acd9-d3e5585890bb","Type":"ContainerStarted","Data":"de00eb58c87fc4503935f2255d8e67545f40dbd5d50e6ce30f98ca8a4d8d3e52"}
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.292362 5028 generic.go:334] "Generic (PLEG): container finished" podID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerID="2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b" exitCode=0
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.292412 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrkdv" event={"ID":"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6","Type":"ContainerDied","Data":"2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b"}
Nov 23 07:04:14 crc kubenswrapper[5028]: I1123 07:04:14.366802 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"]
Nov 23 07:04:14 crc kubenswrapper[5028]: W1123 07:04:14.367025 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ec8929e_da82_4b5e_9017_052092ec1e9a.slice/crio-c45da385ff1be081da0a2da2f071d2266462b3fb5a236c745270e9d83e52c569 WatchSource:0}: Error finding container c45da385ff1be081da0a2da2f071d2266462b3fb5a236c745270e9d83e52c569: Status 404 returned error can't find the container with id c45da385ff1be081da0a2da2f071d2266462b3fb5a236c745270e9d83e52c569
Nov 23 07:04:15 crc kubenswrapper[5028]: I1123 07:04:15.300288 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7" event={"ID":"1ec8929e-da82-4b5e-9017-052092ec1e9a","Type":"ContainerStarted","Data":"c45da385ff1be081da0a2da2f071d2266462b3fb5a236c745270e9d83e52c569"}
Nov 23 07:04:20 crc kubenswrapper[5028]: I1123 07:04:20.343420 5028 generic.go:334] "Generic (PLEG): container finished" podID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerID="b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c" exitCode=0
Nov 23 07:04:20 crc kubenswrapper[5028]: I1123 07:04:20.343497 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrkdv" event={"ID":"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6","Type":"ContainerDied","Data":"b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c"}
Nov 23 07:04:22 crc kubenswrapper[5028]: I1123 07:04:22.360204 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf" event={"ID":"8730ad9d-67c7-4740-acd9-d3e5585890bb","Type":"ContainerStarted","Data":"9a9e36f20d27c7e35d91fb7e73fdacdea68d8c8aeff583b330ac1a735b07879d"}
Nov 23 07:04:22 crc kubenswrapper[5028]: I1123 07:04:22.362786 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:22 crc kubenswrapper[5028]: I1123 07:04:22.396807 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf" podStartSLOduration=1.579385568 podStartE2EDuration="9.396774411s" podCreationTimestamp="2025-11-23 07:04:13 +0000 UTC" firstStartedPulling="2025-11-23 07:04:14.021307077 +0000 UTC m=+837.718711856" lastFinishedPulling="2025-11-23 07:04:21.83869591 +0000 UTC m=+845.536100699" observedRunningTime="2025-11-23 07:04:22.39424101 +0000 UTC m=+846.091645809" watchObservedRunningTime="2025-11-23 07:04:22.396774411 +0000 UTC m=+846.094179200"
Nov 23 07:04:23 crc kubenswrapper[5028]: I1123 07:04:23.367977 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7" event={"ID":"1ec8929e-da82-4b5e-9017-052092ec1e9a","Type":"ContainerStarted","Data":"8063429eb8546ab6734d7e1cddeef5da420326f4c6ad4dbf83f9ea07265d4365"}
Nov 23 07:04:23 crc kubenswrapper[5028]: I1123 07:04:23.369224 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:23 crc kubenswrapper[5028]: I1123 07:04:23.370553 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrkdv" event={"ID":"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6","Type":"ContainerStarted","Data":"5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1"}
Nov 23 07:04:23 crc kubenswrapper[5028]: I1123 07:04:23.395818 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7" podStartSLOduration=1.956496491 podStartE2EDuration="10.395790195s" podCreationTimestamp="2025-11-23 07:04:13 +0000 UTC" firstStartedPulling="2025-11-23 07:04:14.369825534 +0000 UTC m=+838.067230313" lastFinishedPulling="2025-11-23 07:04:22.809119198 +0000 UTC m=+846.506524017" observedRunningTime="2025-11-23 07:04:23.392606838 +0000 UTC m=+847.090011647" watchObservedRunningTime="2025-11-23 07:04:23.395790195 +0000 UTC m=+847.093194994"
Nov 23 07:04:23 crc kubenswrapper[5028]: I1123 07:04:23.431727 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mrkdv" podStartSLOduration=2.934959519 podStartE2EDuration="11.431673646s" podCreationTimestamp="2025-11-23 07:04:12 +0000 UTC" firstStartedPulling="2025-11-23 07:04:14.296169757 +0000 UTC m=+837.993574546" lastFinishedPulling="2025-11-23 07:04:22.792883894 +0000 UTC m=+846.490288673" observedRunningTime="2025-11-23 07:04:23.424694597 +0000 UTC m=+847.122099386" watchObservedRunningTime="2025-11-23 07:04:23.431673646 +0000 UTC m=+847.129078445"
Nov 23 07:04:30 crc kubenswrapper[5028]: I1123 07:04:30.946335 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:04:30 crc kubenswrapper[5028]: I1123 07:04:30.947329 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:04:30 crc kubenswrapper[5028]: I1123 07:04:30.947400 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 07:04:30 crc kubenswrapper[5028]: I1123 07:04:30.948317 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ae2ca72370a9e3e15bd4e9680a68748662e74c8a868e4dcea0405be9f5e30cb"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 07:04:30 crc kubenswrapper[5028]: I1123 07:04:30.948397 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://7ae2ca72370a9e3e15bd4e9680a68748662e74c8a868e4dcea0405be9f5e30cb" gracePeriod=600
Nov 23 07:04:31 crc kubenswrapper[5028]: I1123 07:04:31.424227 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="7ae2ca72370a9e3e15bd4e9680a68748662e74c8a868e4dcea0405be9f5e30cb" exitCode=0
Nov 23 07:04:31 crc kubenswrapper[5028]: I1123 07:04:31.424312 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"7ae2ca72370a9e3e15bd4e9680a68748662e74c8a868e4dcea0405be9f5e30cb"}
Nov 23 07:04:31 crc kubenswrapper[5028]: I1123 07:04:31.424399 5028 scope.go:117] "RemoveContainer" containerID="51d7af075637cdb224580c90d38df94102b585b6fbc5d3e5527e95235bfff29d"
Nov 23 07:04:32 crc kubenswrapper[5028]: I1123 07:04:32.432200 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd"}
Nov 23 07:04:32 crc kubenswrapper[5028]: I1123 07:04:32.432813 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:32 crc kubenswrapper[5028]: I1123 07:04:32.432836 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:32 crc kubenswrapper[5028]: I1123 07:04:32.481323 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:33 crc kubenswrapper[5028]: I1123 07:04:33.485234 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:34 crc kubenswrapper[5028]: I1123 07:04:34.129754 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d96c8b774-bv8l7"
Nov 23 07:04:34 crc kubenswrapper[5028]: I1123 07:04:34.895508 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrkdv"]
Nov 23 07:04:35 crc kubenswrapper[5028]: I1123 07:04:35.448434 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mrkdv" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="registry-server" containerID="cri-o://5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1" gracePeriod=2
Nov 23 07:04:35 crc kubenswrapper[5028]: I1123 07:04:35.851619 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:35 crc kubenswrapper[5028]: I1123 07:04:35.997008 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgdxk\" (UniqueName: \"kubernetes.io/projected/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-kube-api-access-hgdxk\") pod \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") "
Nov 23 07:04:35 crc kubenswrapper[5028]: I1123 07:04:35.997118 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-utilities\") pod \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") "
Nov 23 07:04:35 crc kubenswrapper[5028]: I1123 07:04:35.997220 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-catalog-content\") pod \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\" (UID: \"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6\") "
Nov 23 07:04:35 crc kubenswrapper[5028]: I1123 07:04:35.998150 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-utilities" (OuterVolumeSpecName: "utilities") pod "d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" (UID: "d5de94d2-590e-48c7-b32e-e2e0d35e3ca6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.002738 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-kube-api-access-hgdxk" (OuterVolumeSpecName: "kube-api-access-hgdxk") pod "d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" (UID: "d5de94d2-590e-48c7-b32e-e2e0d35e3ca6"). InnerVolumeSpecName "kube-api-access-hgdxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.020923 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" (UID: "d5de94d2-590e-48c7-b32e-e2e0d35e3ca6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.098162 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgdxk\" (UniqueName: \"kubernetes.io/projected/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-kube-api-access-hgdxk\") on node \"crc\" DevicePath \"\""
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.098564 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.098575 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.454988 5028 generic.go:334] "Generic (PLEG): container finished" podID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerID="5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1" exitCode=0
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.455049 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrkdv" event={"ID":"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6","Type":"ContainerDied","Data":"5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1"}
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.455064 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrkdv"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.455088 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrkdv" event={"ID":"d5de94d2-590e-48c7-b32e-e2e0d35e3ca6","Type":"ContainerDied","Data":"c88c795378ded144ec603bbf9b65cb9684770b35c0003798a0581ef23b43dd1f"}
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.455143 5028 scope.go:117] "RemoveContainer" containerID="5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.469321 5028 scope.go:117] "RemoveContainer" containerID="b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.477289 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrkdv"]
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.481090 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrkdv"]
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.495537 5028 scope.go:117] "RemoveContainer" containerID="2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.509982 5028 scope.go:117] "RemoveContainer" containerID="5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1"
Nov 23 07:04:36 crc kubenswrapper[5028]: E1123 07:04:36.510550 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1\": container with ID starting with 5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1 not found: ID does not exist" containerID="5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.510579 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1"} err="failed to get container status \"5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1\": rpc error: code = NotFound desc = could not find container \"5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1\": container with ID starting with 5c1e87b852fdb3f6a947f6aa50cbf44dba80e4144703ca61144751ba37e9f4c1 not found: ID does not exist"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.510599 5028 scope.go:117] "RemoveContainer" containerID="b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c"
Nov 23 07:04:36 crc kubenswrapper[5028]: E1123 07:04:36.510771 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c\": container with ID starting with b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c not found: ID does not exist" containerID="b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.510786 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c"} err="failed to get container status \"b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c\": rpc error: code = NotFound desc = could not find container \"b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c\": container with ID starting with b27e06466996b97612b7e94f46f3ed8dc2f34d8ef7c2eebad65df868f0706a8c not found: ID does not exist"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.510798 5028 scope.go:117] "RemoveContainer" containerID="2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b"
Nov 23 07:04:36 crc kubenswrapper[5028]: E1123 07:04:36.510969 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b\": container with ID starting with 2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b not found: ID does not exist" containerID="2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b"
Nov 23 07:04:36 crc kubenswrapper[5028]: I1123 07:04:36.510987 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b"} err="failed to get container status \"2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b\": rpc error: code = NotFound desc = could not find container \"2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b\": container with ID starting with 2e941d00492249f97bf1fd6cfe81e32ce813a7420965e1d7afd169e9da6b284b not found: ID does not exist"
Nov 23 07:04:37 crc kubenswrapper[5028]: I1123 07:04:37.060595 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" path="/var/lib/kubelet/pods/d5de94d2-590e-48c7-b32e-e2e0d35e3ca6/volumes"
Nov 23 07:04:53 crc kubenswrapper[5028]: I1123 07:04:53.634810 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7c4fd7cf5d-jsllf"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.336085 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7"]
Nov 23 07:04:54 crc kubenswrapper[5028]: E1123 07:04:54.336385 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="extract-utilities"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.336407 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="extract-utilities"
Nov 23 07:04:54 crc kubenswrapper[5028]: E1123 07:04:54.336424 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="registry-server"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.336432 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="registry-server"
Nov 23 07:04:54 crc kubenswrapper[5028]: E1123 07:04:54.336451 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="extract-content"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.336459 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="extract-content"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.336592 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5de94d2-590e-48c7-b32e-e2e0d35e3ca6" containerName="registry-server"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.337073 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.339513 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.339886 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-xflvk"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.344876 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-b8wkk"]
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.353117 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-b8wkk"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.356598 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7"]
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.357944 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.358248 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.428411 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-hcgg8"]
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.429864 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-hcgg8"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.435463 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q6txc"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.435782 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.435939 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.436089 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.466510 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-hmhtp"]
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.468393 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-hmhtp"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.472896 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.499199 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-hmhtp"]
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508139 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f994dd2d-8a9e-45c3-9bb3-91639e07482d-metrics-certs\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508203 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508233 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics-certs\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508267 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48f9f\" (UniqueName: \"kubernetes.io/projected/f994dd2d-8a9e-45c3-9bb3-91639e07482d-kube-api-access-48f9f\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508291 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-conf\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508308 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62pt\" (UniqueName: \"kubernetes.io/projected/99cd1f08-746f-4db3-bd8f-2505bee5ce57-kube-api-access-m62pt\") pod \"frr-k8s-webhook-server-6998585d5-6nxm7\" (UID: \"99cd1f08-746f-4db3-bd8f-2505bee5ce57\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508323 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-metrics-certs\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8"
Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508338 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brl7n\" (UniqueName: \"kubernetes.io/projected/2409cec3-0c1f-4c76-846f-2e1bf5b24258-kube-api-access-brl7n\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") "
pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508358 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f994dd2d-8a9e-45c3-9bb3-91639e07482d-cert\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508380 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508407 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99cd1f08-746f-4db3-bd8f-2505bee5ce57-cert\") pod \"frr-k8s-webhook-server-6998585d5-6nxm7\" (UID: \"99cd1f08-746f-4db3-bd8f-2505bee5ce57\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508444 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-sockets\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508463 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-reloader\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508511 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-startup\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508569 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4hx\" (UniqueName: \"kubernetes.io/projected/b9813473-eba8-4ab8-9778-1e96817ebcc7-kube-api-access-sz4hx\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.508583 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2409cec3-0c1f-4c76-846f-2e1bf5b24258-metallb-excludel2\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609680 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48f9f\" (UniqueName: \"kubernetes.io/projected/f994dd2d-8a9e-45c3-9bb3-91639e07482d-kube-api-access-48f9f\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc 
kubenswrapper[5028]: I1123 07:04:54.609738 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62pt\" (UniqueName: \"kubernetes.io/projected/99cd1f08-746f-4db3-bd8f-2505bee5ce57-kube-api-access-m62pt\") pod \"frr-k8s-webhook-server-6998585d5-6nxm7\" (UID: \"99cd1f08-746f-4db3-bd8f-2505bee5ce57\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609769 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-conf\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609793 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-metrics-certs\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609815 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brl7n\" (UniqueName: \"kubernetes.io/projected/2409cec3-0c1f-4c76-846f-2e1bf5b24258-kube-api-access-brl7n\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609838 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f994dd2d-8a9e-45c3-9bb3-91639e07482d-cert\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609865 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609892 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99cd1f08-746f-4db3-bd8f-2505bee5ce57-cert\") pod \"frr-k8s-webhook-server-6998585d5-6nxm7\" (UID: \"99cd1f08-746f-4db3-bd8f-2505bee5ce57\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609932 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-sockets\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.609974 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-reloader\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610015 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-startup\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610060 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4hx\" (UniqueName: \"kubernetes.io/projected/b9813473-eba8-4ab8-9778-1e96817ebcc7-kube-api-access-sz4hx\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610081 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2409cec3-0c1f-4c76-846f-2e1bf5b24258-metallb-excludel2\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610105 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f994dd2d-8a9e-45c3-9bb3-91639e07482d-metrics-certs\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610127 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610156 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics-certs\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: E1123 07:04:54.610274 5028 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 23 07:04:54 crc kubenswrapper[5028]: E1123 07:04:54.610329 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics-certs podName:b9813473-eba8-4ab8-9778-1e96817ebcc7 nodeName:}" failed. No retries permitted until 2025-11-23 07:04:55.11030891 +0000 UTC m=+878.807713679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics-certs") pod "frr-k8s-b8wkk" (UID: "b9813473-eba8-4ab8-9778-1e96817ebcc7") : secret "frr-k8s-certs-secret" not found Nov 23 07:04:54 crc kubenswrapper[5028]: E1123 07:04:54.610566 5028 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610569 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-sockets\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: E1123 07:04:54.610600 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist podName:2409cec3-0c1f-4c76-846f-2e1bf5b24258 nodeName:}" failed. No retries permitted until 2025-11-23 07:04:55.110591047 +0000 UTC m=+878.807995826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist") pod "speaker-hcgg8" (UID: "2409cec3-0c1f-4c76-846f-2e1bf5b24258") : secret "metallb-memberlist" not found Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610695 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-conf\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610854 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-reloader\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.610941 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.611386 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2409cec3-0c1f-4c76-846f-2e1bf5b24258-metallb-excludel2\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.611474 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b9813473-eba8-4ab8-9778-1e96817ebcc7-frr-startup\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.614278 5028 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.620006 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-metrics-certs\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.625481 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f994dd2d-8a9e-45c3-9bb3-91639e07482d-cert\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.628056 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brl7n\" (UniqueName: \"kubernetes.io/projected/2409cec3-0c1f-4c76-846f-2e1bf5b24258-kube-api-access-brl7n\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.628811 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f994dd2d-8a9e-45c3-9bb3-91639e07482d-metrics-certs\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.628894 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99cd1f08-746f-4db3-bd8f-2505bee5ce57-cert\") pod \"frr-k8s-webhook-server-6998585d5-6nxm7\" (UID: \"99cd1f08-746f-4db3-bd8f-2505bee5ce57\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.629512 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m62pt\" (UniqueName: \"kubernetes.io/projected/99cd1f08-746f-4db3-bd8f-2505bee5ce57-kube-api-access-m62pt\") pod \"frr-k8s-webhook-server-6998585d5-6nxm7\" (UID: \"99cd1f08-746f-4db3-bd8f-2505bee5ce57\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.635194 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4hx\" (UniqueName: \"kubernetes.io/projected/b9813473-eba8-4ab8-9778-1e96817ebcc7-kube-api-access-sz4hx\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.640584 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48f9f\" (UniqueName: \"kubernetes.io/projected/f994dd2d-8a9e-45c3-9bb3-91639e07482d-kube-api-access-48f9f\") pod \"controller-6c7b4b5f48-hmhtp\" (UID: \"f994dd2d-8a9e-45c3-9bb3-91639e07482d\") " pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.673701 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.800299 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:54 crc kubenswrapper[5028]: I1123 07:04:54.986995 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7"] Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.116798 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:55 crc kubenswrapper[5028]: E1123 07:04:55.116967 5028 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 23 07:04:55 crc kubenswrapper[5028]: E1123 07:04:55.117151 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist podName:2409cec3-0c1f-4c76-846f-2e1bf5b24258 nodeName:}" failed. No retries permitted until 2025-11-23 07:04:56.117124669 +0000 UTC m=+879.814529478 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist") pod "speaker-hcgg8" (UID: "2409cec3-0c1f-4c76-846f-2e1bf5b24258") : secret "metallb-memberlist" not found Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.117044 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics-certs\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.123487 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9813473-eba8-4ab8-9778-1e96817ebcc7-metrics-certs\") pod \"frr-k8s-b8wkk\" (UID: \"b9813473-eba8-4ab8-9778-1e96817ebcc7\") " pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.248173 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-hmhtp"] Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.287679 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.578011 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerStarted","Data":"e89a60c7661f6904c6963a4fa8fcc6f28569d2a64978f36f0d96c6421d87f128"} Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.579749 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-hmhtp" event={"ID":"f994dd2d-8a9e-45c3-9bb3-91639e07482d","Type":"ContainerStarted","Data":"3666143c71ca2432046f3d488c3f354855e9091012e5e321c142ea309afcfe1c"} Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.579822 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-hmhtp" event={"ID":"f994dd2d-8a9e-45c3-9bb3-91639e07482d","Type":"ContainerStarted","Data":"82a637057c5b1b027b415b432ab6766a6c7c8dbbb55ccf3aa142b00f3045d129"} Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.579842 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-hmhtp" event={"ID":"f994dd2d-8a9e-45c3-9bb3-91639e07482d","Type":"ContainerStarted","Data":"6ed3060a77f69be328ec9a2c28fa1eef38d698b4e4dea2c0f5c6fe56ed10d24e"} Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.579864 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.581275 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" event={"ID":"99cd1f08-746f-4db3-bd8f-2505bee5ce57","Type":"ContainerStarted","Data":"84b73fa9987758c4dfbba41ae0d1d48cca78fe70b71d4a3ca17f5e58e251e36d"} Nov 23 07:04:55 crc kubenswrapper[5028]: I1123 07:04:55.600131 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-hmhtp" podStartSLOduration=1.600099799 podStartE2EDuration="1.600099799s" podCreationTimestamp="2025-11-23 07:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:04:55.597728112 +0000 UTC m=+879.295132911" watchObservedRunningTime="2025-11-23 07:04:55.600099799 +0000 UTC m=+879.297504568" Nov 23 07:04:56 crc kubenswrapper[5028]: I1123 07:04:56.136026 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:56 crc kubenswrapper[5028]: I1123 07:04:56.149045 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2409cec3-0c1f-4c76-846f-2e1bf5b24258-memberlist\") pod \"speaker-hcgg8\" (UID: \"2409cec3-0c1f-4c76-846f-2e1bf5b24258\") " pod="metallb-system/speaker-hcgg8" Nov 23 07:04:56 crc kubenswrapper[5028]: I1123 07:04:56.250364 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-hcgg8" Nov 23 07:04:56 crc kubenswrapper[5028]: W1123 07:04:56.279867 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2409cec3_0c1f_4c76_846f_2e1bf5b24258.slice/crio-c0c62adf84e1162228a3bef651a373ee4d486cce0a438c98b8feae3087ec0836 WatchSource:0}: Error finding container c0c62adf84e1162228a3bef651a373ee4d486cce0a438c98b8feae3087ec0836: Status 404 returned error can't find the container with id c0c62adf84e1162228a3bef651a373ee4d486cce0a438c98b8feae3087ec0836 Nov 23 07:04:56 crc kubenswrapper[5028]: I1123 07:04:56.591002 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hcgg8" event={"ID":"2409cec3-0c1f-4c76-846f-2e1bf5b24258","Type":"ContainerStarted","Data":"6cf38ea2291ecfec3b715540e9f81feafdd2a5305f55fd371864458466d4466c"} Nov 23 07:04:56 crc kubenswrapper[5028]: I1123 07:04:56.591045 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hcgg8" event={"ID":"2409cec3-0c1f-4c76-846f-2e1bf5b24258","Type":"ContainerStarted","Data":"c0c62adf84e1162228a3bef651a373ee4d486cce0a438c98b8feae3087ec0836"} Nov 23 07:04:57 crc kubenswrapper[5028]: I1123 07:04:57.602923 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-hcgg8" event={"ID":"2409cec3-0c1f-4c76-846f-2e1bf5b24258","Type":"ContainerStarted","Data":"8fa26e241f66573ccdd15463d0f3b19b4afe0d88ea708cb26693e5507a94c062"} Nov 23 07:04:57 crc kubenswrapper[5028]: I1123 07:04:57.603278 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-hcgg8" Nov 23 07:04:57 crc kubenswrapper[5028]: I1123 07:04:57.618259 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-hcgg8" podStartSLOduration=3.618242794 podStartE2EDuration="3.618242794s" podCreationTimestamp="2025-11-23 07:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:04:57.61644736 +0000 UTC m=+881.313852139" watchObservedRunningTime="2025-11-23 07:04:57.618242794 +0000 UTC m=+881.315647573" Nov 23 07:05:02 crc kubenswrapper[5028]: I1123 07:05:02.641739 5028 generic.go:334] "Generic (PLEG): container finished" podID="b9813473-eba8-4ab8-9778-1e96817ebcc7" containerID="50fe3a76dd46f7129c6907b0c97c594d3809c5c4c9c8a825d1bc437d5aac87e9" exitCode=0 Nov 23 07:05:02 crc kubenswrapper[5028]: I1123 07:05:02.641849 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerDied","Data":"50fe3a76dd46f7129c6907b0c97c594d3809c5c4c9c8a825d1bc437d5aac87e9"} Nov 23 07:05:02 crc kubenswrapper[5028]: I1123 07:05:02.644641 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" event={"ID":"99cd1f08-746f-4db3-bd8f-2505bee5ce57","Type":"ContainerStarted","Data":"c5d6e2f463bb163854d03b4fb5aebe2a71ef8dba8981a3fed4dd43bf44803646"} Nov 23 07:05:02 crc kubenswrapper[5028]: I1123 07:05:02.644962 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:05:02 crc kubenswrapper[5028]: I1123 07:05:02.685580 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" podStartSLOduration=1.532843666 
podStartE2EDuration="8.6855598s" podCreationTimestamp="2025-11-23 07:04:54 +0000 UTC" firstStartedPulling="2025-11-23 07:04:55.00015799 +0000 UTC m=+878.697562769" lastFinishedPulling="2025-11-23 07:05:02.152874124 +0000 UTC m=+885.850278903" observedRunningTime="2025-11-23 07:05:02.681229475 +0000 UTC m=+886.378634244" watchObservedRunningTime="2025-11-23 07:05:02.6855598 +0000 UTC m=+886.382964579" Nov 23 07:05:03 crc kubenswrapper[5028]: I1123 07:05:03.652476 5028 generic.go:334] "Generic (PLEG): container finished" podID="b9813473-eba8-4ab8-9778-1e96817ebcc7" containerID="7fb4766d20292972dfa6dd23bbaf9f2e2ca3660987697e8734624189a85ffdfc" exitCode=0 Nov 23 07:05:03 crc kubenswrapper[5028]: I1123 07:05:03.652520 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerDied","Data":"7fb4766d20292972dfa6dd23bbaf9f2e2ca3660987697e8734624189a85ffdfc"} Nov 23 07:05:04 crc kubenswrapper[5028]: I1123 07:05:04.660106 5028 generic.go:334] "Generic (PLEG): container finished" podID="b9813473-eba8-4ab8-9778-1e96817ebcc7" containerID="77285bb5e4f5a02a2eead5b4b5c17771575d123beb65f7299277e948491bafac" exitCode=0 Nov 23 07:05:04 crc kubenswrapper[5028]: I1123 07:05:04.660223 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerDied","Data":"77285bb5e4f5a02a2eead5b4b5c17771575d123beb65f7299277e948491bafac"} Nov 23 07:05:05 crc kubenswrapper[5028]: I1123 07:05:05.668778 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerStarted","Data":"6ef79ad2087f574ea16d081de4ff90081b1f4f3e81c485fb20a56f4302496c1c"} Nov 23 07:05:05 crc kubenswrapper[5028]: I1123 07:05:05.669156 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerStarted","Data":"8eabc78a9f54b8593a14fd3d2e3a237e171104b9d7b4edf8841d4658d8ae8871"} Nov 23 07:05:05 crc kubenswrapper[5028]: I1123 07:05:05.669175 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerStarted","Data":"319631f8a81156759987200880be0af56f0bae6dbdfe266b2ac8d6c87aae1ac6"} Nov 23 07:05:05 crc kubenswrapper[5028]: I1123 07:05:05.669187 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerStarted","Data":"9408544b1e62b117fcc833072aa16ebbed83c9ac8886165d63fadb64a881b7d5"} Nov 23 07:05:05 crc kubenswrapper[5028]: I1123 07:05:05.669197 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerStarted","Data":"bbf8f649d304c4c68a28eef867f30028dd08445251068086707d8adac856148d"} Nov 23 07:05:06 crc kubenswrapper[5028]: I1123 07:05:06.255500 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-hcgg8" Nov 23 07:05:06 crc kubenswrapper[5028]: I1123 07:05:06.677739 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-b8wkk" event={"ID":"b9813473-eba8-4ab8-9778-1e96817ebcc7","Type":"ContainerStarted","Data":"414ac5ec19f0e53138d37f939447200f795421512e5f1a42f97760b9cda35319"} Nov 23 07:05:06 crc 
kubenswrapper[5028]: I1123 07:05:06.677928 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:05:06 crc kubenswrapper[5028]: I1123 07:05:06.713906 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-b8wkk" podStartSLOduration=5.9397263670000005 podStartE2EDuration="12.713891733s" podCreationTimestamp="2025-11-23 07:04:54 +0000 UTC" firstStartedPulling="2025-11-23 07:04:55.402438763 +0000 UTC m=+879.099843552" lastFinishedPulling="2025-11-23 07:05:02.176604139 +0000 UTC m=+885.874008918" observedRunningTime="2025-11-23 07:05:06.711504105 +0000 UTC m=+890.408908894" watchObservedRunningTime="2025-11-23 07:05:06.713891733 +0000 UTC m=+890.411296512" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.171151 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5"] Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.173226 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.177713 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.181496 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5"] Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.320626 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvxt\" (UniqueName: \"kubernetes.io/projected/fefe2539-3b32-4043-82e8-68ee18b37878-kube-api-access-5nvxt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.320753 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.320819 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.421490 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 
07:05:08.421562 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nvxt\" (UniqueName: \"kubernetes.io/projected/fefe2539-3b32-4043-82e8-68ee18b37878-kube-api-access-5nvxt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.421610 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.422184 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.422226 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.441341 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nvxt\" (UniqueName: \"kubernetes.io/projected/fefe2539-3b32-4043-82e8-68ee18b37878-kube-api-access-5nvxt\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.492523 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:08 crc kubenswrapper[5028]: I1123 07:05:08.895078 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5"] Nov 23 07:05:08 crc kubenswrapper[5028]: W1123 07:05:08.899377 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfefe2539_3b32_4043_82e8_68ee18b37878.slice/crio-647ac18198b4d06525d8941ddfa0e7faa9a2267611f484fb669671638b2f7f79 WatchSource:0}: Error finding container 647ac18198b4d06525d8941ddfa0e7faa9a2267611f484fb669671638b2f7f79: Status 404 returned error can't find the container with id 647ac18198b4d06525d8941ddfa0e7faa9a2267611f484fb669671638b2f7f79 Nov 23 07:05:09 crc kubenswrapper[5028]: I1123 07:05:09.694536 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" event={"ID":"fefe2539-3b32-4043-82e8-68ee18b37878","Type":"ContainerStarted","Data":"98e55e96ca1030ba946f36e5e05bbcabcf0242b64d040e2069fe1aae75505bde"} Nov 23 07:05:09 crc kubenswrapper[5028]: I1123 07:05:09.694890 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" event={"ID":"fefe2539-3b32-4043-82e8-68ee18b37878","Type":"ContainerStarted","Data":"647ac18198b4d06525d8941ddfa0e7faa9a2267611f484fb669671638b2f7f79"} Nov 23 07:05:10 crc kubenswrapper[5028]: I1123 07:05:10.288755 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:05:10 crc kubenswrapper[5028]: I1123 07:05:10.347030 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:05:10 crc kubenswrapper[5028]: I1123 07:05:10.701641 5028 generic.go:334] "Generic (PLEG): container finished" podID="fefe2539-3b32-4043-82e8-68ee18b37878" containerID="98e55e96ca1030ba946f36e5e05bbcabcf0242b64d040e2069fe1aae75505bde" exitCode=0 Nov 23 07:05:10 crc kubenswrapper[5028]: I1123 07:05:10.701700 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" event={"ID":"fefe2539-3b32-4043-82e8-68ee18b37878","Type":"ContainerDied","Data":"98e55e96ca1030ba946f36e5e05bbcabcf0242b64d040e2069fe1aae75505bde"} Nov 23 07:05:14 crc kubenswrapper[5028]: I1123 07:05:14.678587 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6nxm7" Nov 23 07:05:14 crc kubenswrapper[5028]: I1123 07:05:14.730652 5028 generic.go:334] "Generic (PLEG): container finished" podID="fefe2539-3b32-4043-82e8-68ee18b37878" containerID="ab7e0157c37b084725ade907fda4ea3a5e6f3df8c98a530b2af5217de4cd5539" exitCode=0 Nov 23 07:05:14 crc kubenswrapper[5028]: I1123 07:05:14.730704 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" event={"ID":"fefe2539-3b32-4043-82e8-68ee18b37878","Type":"ContainerDied","Data":"ab7e0157c37b084725ade907fda4ea3a5e6f3df8c98a530b2af5217de4cd5539"} Nov 23 07:05:14 crc kubenswrapper[5028]: I1123 07:05:14.805484 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-hmhtp" Nov 
23 07:05:15 crc kubenswrapper[5028]: I1123 07:05:15.291476 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-b8wkk" Nov 23 07:05:15 crc kubenswrapper[5028]: I1123 07:05:15.742382 5028 generic.go:334] "Generic (PLEG): container finished" podID="fefe2539-3b32-4043-82e8-68ee18b37878" containerID="d90cf6cbef42b59daf76671bb61365cf76c699dc9333c4992577c2878c2ac128" exitCode=0 Nov 23 07:05:15 crc kubenswrapper[5028]: I1123 07:05:15.742590 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" event={"ID":"fefe2539-3b32-4043-82e8-68ee18b37878","Type":"ContainerDied","Data":"d90cf6cbef42b59daf76671bb61365cf76c699dc9333c4992577c2878c2ac128"} Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.361573 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.479041 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-bundle\") pod \"fefe2539-3b32-4043-82e8-68ee18b37878\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.479084 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-util\") pod \"fefe2539-3b32-4043-82e8-68ee18b37878\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.479185 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nvxt\" (UniqueName: \"kubernetes.io/projected/fefe2539-3b32-4043-82e8-68ee18b37878-kube-api-access-5nvxt\") pod \"fefe2539-3b32-4043-82e8-68ee18b37878\" (UID: \"fefe2539-3b32-4043-82e8-68ee18b37878\") " Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.479909 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-bundle" (OuterVolumeSpecName: "bundle") pod "fefe2539-3b32-4043-82e8-68ee18b37878" (UID: "fefe2539-3b32-4043-82e8-68ee18b37878"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.489272 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-util" (OuterVolumeSpecName: "util") pod "fefe2539-3b32-4043-82e8-68ee18b37878" (UID: "fefe2539-3b32-4043-82e8-68ee18b37878"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.491182 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fefe2539-3b32-4043-82e8-68ee18b37878-kube-api-access-5nvxt" (OuterVolumeSpecName: "kube-api-access-5nvxt") pod "fefe2539-3b32-4043-82e8-68ee18b37878" (UID: "fefe2539-3b32-4043-82e8-68ee18b37878"). InnerVolumeSpecName "kube-api-access-5nvxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.581822 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nvxt\" (UniqueName: \"kubernetes.io/projected/fefe2539-3b32-4043-82e8-68ee18b37878-kube-api-access-5nvxt\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.581993 5028 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.582016 5028 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fefe2539-3b32-4043-82e8-68ee18b37878-util\") on node \"crc\" DevicePath \"\"" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.769175 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" event={"ID":"fefe2539-3b32-4043-82e8-68ee18b37878","Type":"ContainerDied","Data":"647ac18198b4d06525d8941ddfa0e7faa9a2267611f484fb669671638b2f7f79"} Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.769513 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="647ac18198b4d06525d8941ddfa0e7faa9a2267611f484fb669671638b2f7f79" Nov 23 07:05:18 crc kubenswrapper[5028]: I1123 07:05:18.769304 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.223521 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s"] Nov 23 07:05:26 crc kubenswrapper[5028]: E1123 07:05:26.224322 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefe2539-3b32-4043-82e8-68ee18b37878" containerName="util" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.224336 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefe2539-3b32-4043-82e8-68ee18b37878" containerName="util" Nov 23 07:05:26 crc kubenswrapper[5028]: E1123 07:05:26.224351 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefe2539-3b32-4043-82e8-68ee18b37878" containerName="pull" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.224356 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefe2539-3b32-4043-82e8-68ee18b37878" containerName="pull" Nov 23 07:05:26 crc kubenswrapper[5028]: E1123 07:05:26.224373 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fefe2539-3b32-4043-82e8-68ee18b37878" containerName="extract" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.224380 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fefe2539-3b32-4043-82e8-68ee18b37878" containerName="extract" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.224484 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fefe2539-3b32-4043-82e8-68ee18b37878" containerName="extract" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.224868 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.226591 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.226718 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.228224 5028 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-pdn2c" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.246597 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s"] Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.393270 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggtz\" (UniqueName: \"kubernetes.io/projected/d7a6b7c8-173e-463c-8a5e-29bcf81e153f-kube-api-access-pggtz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fbk7s\" (UID: \"d7a6b7c8-173e-463c-8a5e-29bcf81e153f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.393371 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7a6b7c8-173e-463c-8a5e-29bcf81e153f-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fbk7s\" (UID: \"d7a6b7c8-173e-463c-8a5e-29bcf81e153f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.495168 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggtz\" (UniqueName: \"kubernetes.io/projected/d7a6b7c8-173e-463c-8a5e-29bcf81e153f-kube-api-access-pggtz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fbk7s\" (UID: \"d7a6b7c8-173e-463c-8a5e-29bcf81e153f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.495239 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7a6b7c8-173e-463c-8a5e-29bcf81e153f-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fbk7s\" (UID: \"d7a6b7c8-173e-463c-8a5e-29bcf81e153f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.495882 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7a6b7c8-173e-463c-8a5e-29bcf81e153f-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fbk7s\" (UID: \"d7a6b7c8-173e-463c-8a5e-29bcf81e153f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.524127 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggtz\" (UniqueName: \"kubernetes.io/projected/d7a6b7c8-173e-463c-8a5e-29bcf81e153f-kube-api-access-pggtz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-fbk7s\" (UID: \"d7a6b7c8-173e-463c-8a5e-29bcf81e153f\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:26 crc kubenswrapper[5028]: I1123 07:05:26.544538 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" Nov 23 07:05:27 crc kubenswrapper[5028]: I1123 07:05:27.016258 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s"] Nov 23 07:05:27 crc kubenswrapper[5028]: I1123 07:05:27.832442 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" event={"ID":"d7a6b7c8-173e-463c-8a5e-29bcf81e153f","Type":"ContainerStarted","Data":"e30aa3e47291102374aa76cf7fd2f79f87aaa3526e5c789a29d798354b09b7b9"} Nov 23 07:05:35 crc kubenswrapper[5028]: I1123 07:05:35.888115 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" event={"ID":"d7a6b7c8-173e-463c-8a5e-29bcf81e153f","Type":"ContainerStarted","Data":"b9748b2086a9cf28919a6085ffcbd644cd1b842cf3f7269e1a1bf850128627f3"} Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.659185 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-fbk7s" podStartSLOduration=4.906282293 podStartE2EDuration="12.65915209s" podCreationTimestamp="2025-11-23 07:05:26 +0000 UTC" firstStartedPulling="2025-11-23 07:05:27.031622998 +0000 UTC m=+910.729027777" lastFinishedPulling="2025-11-23 07:05:34.784492785 +0000 UTC m=+918.481897574" observedRunningTime="2025-11-23 07:05:35.910646013 +0000 UTC m=+919.608050792" watchObservedRunningTime="2025-11-23 07:05:38.65915209 +0000 UTC m=+922.356556859" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.661356 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-h2b72"] Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.662152 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.665506 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.665895 5028 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xckff" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.666762 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.690532 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-h2b72"] Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.787484 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thbx5\" (UniqueName: \"kubernetes.io/projected/837ef15b-d82d-40d7-b355-96a88642875a-kube-api-access-thbx5\") pod \"cert-manager-webhook-f4fb5df64-h2b72\" (UID: \"837ef15b-d82d-40d7-b355-96a88642875a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.787891 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/837ef15b-d82d-40d7-b355-96a88642875a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-h2b72\" (UID: \"837ef15b-d82d-40d7-b355-96a88642875a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.889202 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/837ef15b-d82d-40d7-b355-96a88642875a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-h2b72\" (UID: \"837ef15b-d82d-40d7-b355-96a88642875a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.889289 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thbx5\" (UniqueName: \"kubernetes.io/projected/837ef15b-d82d-40d7-b355-96a88642875a-kube-api-access-thbx5\") pod \"cert-manager-webhook-f4fb5df64-h2b72\" (UID: \"837ef15b-d82d-40d7-b355-96a88642875a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.909695 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/837ef15b-d82d-40d7-b355-96a88642875a-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-h2b72\" (UID: \"837ef15b-d82d-40d7-b355-96a88642875a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.916934 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thbx5\" (UniqueName: \"kubernetes.io/projected/837ef15b-d82d-40d7-b355-96a88642875a-kube-api-access-thbx5\") pod \"cert-manager-webhook-f4fb5df64-h2b72\" (UID: \"837ef15b-d82d-40d7-b355-96a88642875a\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:38 crc kubenswrapper[5028]: I1123 07:05:38.982964 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.434823 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-h2b72"] Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.796934 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd"] Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.798016 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.800016 5028 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8wc9s" Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.810185 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd"] Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.909479 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j82m6\" (UniqueName: \"kubernetes.io/projected/d3928449-d408-40f9-961b-952b37cad330-kube-api-access-j82m6\") pod \"cert-manager-cainjector-855d9ccff4-7bwsd\" (UID: \"d3928449-d408-40f9-961b-952b37cad330\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.909630 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3928449-d408-40f9-961b-952b37cad330-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7bwsd\" (UID: \"d3928449-d408-40f9-961b-952b37cad330\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:39 crc kubenswrapper[5028]: I1123 07:05:39.911619 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" event={"ID":"837ef15b-d82d-40d7-b355-96a88642875a","Type":"ContainerStarted","Data":"811aa7e7edd36ae739236de75a9cbf5d36fb9684c370b9bc4a65f6244e33f2e6"} Nov 23 07:05:40 crc kubenswrapper[5028]: I1123 07:05:40.011859 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j82m6\" (UniqueName: \"kubernetes.io/projected/d3928449-d408-40f9-961b-952b37cad330-kube-api-access-j82m6\") pod \"cert-manager-cainjector-855d9ccff4-7bwsd\" (UID: \"d3928449-d408-40f9-961b-952b37cad330\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:40 crc kubenswrapper[5028]: I1123 07:05:40.011965 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3928449-d408-40f9-961b-952b37cad330-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7bwsd\" (UID: \"d3928449-d408-40f9-961b-952b37cad330\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:40 crc kubenswrapper[5028]: I1123 07:05:40.029657 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3928449-d408-40f9-961b-952b37cad330-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-7bwsd\" (UID: \"d3928449-d408-40f9-961b-952b37cad330\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:40 crc kubenswrapper[5028]: I1123 07:05:40.040841 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j82m6\" (UniqueName: \"kubernetes.io/projected/d3928449-d408-40f9-961b-952b37cad330-kube-api-access-j82m6\") pod \"cert-manager-cainjector-855d9ccff4-7bwsd\" (UID: \"d3928449-d408-40f9-961b-952b37cad330\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:40 crc kubenswrapper[5028]: I1123 07:05:40.120323 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" Nov 23 07:05:40 crc kubenswrapper[5028]: I1123 07:05:40.315681 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd"] Nov 23 07:05:40 crc kubenswrapper[5028]: I1123 07:05:40.924104 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" event={"ID":"d3928449-d408-40f9-961b-952b37cad330","Type":"ContainerStarted","Data":"b6a55ae76e306155dedfcf12a3db1b340f707ec51a36ec53ec936869ab51d7d0"} Nov 23 07:05:46 crc kubenswrapper[5028]: I1123 07:05:46.966510 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" event={"ID":"d3928449-d408-40f9-961b-952b37cad330","Type":"ContainerStarted","Data":"12f880e6b28ecfe10796fce14d4872093c5663b72bf052187853250cb918ecac"} Nov 23 07:05:46 crc kubenswrapper[5028]: I1123 07:05:46.968271 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" event={"ID":"837ef15b-d82d-40d7-b355-96a88642875a","Type":"ContainerStarted","Data":"dae9c657825cc3c98972b4793f1b97235daa2557b8c6b9cfe4f1652142dbe98c"} Nov 23 07:05:46 crc kubenswrapper[5028]: I1123 07:05:46.968495 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:46 crc kubenswrapper[5028]: I1123 07:05:46.983521 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-7bwsd" podStartSLOduration=1.5911064430000001 podStartE2EDuration="7.983503826s" podCreationTimestamp="2025-11-23 07:05:39 +0000 UTC" firstStartedPulling="2025-11-23 07:05:40.319623795 +0000 UTC m=+924.017028574" lastFinishedPulling="2025-11-23 07:05:46.712021188 +0000 UTC m=+930.409425957" observedRunningTime="2025-11-23 07:05:46.980350919 +0000 UTC m=+930.677755698" watchObservedRunningTime="2025-11-23 07:05:46.983503826 +0000 UTC m=+930.680908605" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.031442 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" podStartSLOduration=3.7437753320000002 podStartE2EDuration="11.03142151s" podCreationTimestamp="2025-11-23 07:05:38 +0000 UTC" firstStartedPulling="2025-11-23 07:05:39.446504967 +0000 UTC m=+923.143909746" lastFinishedPulling="2025-11-23 07:05:46.734151145 +0000 UTC m=+930.431555924" observedRunningTime="2025-11-23 07:05:47.000789835 +0000 UTC m=+930.698194624" watchObservedRunningTime="2025-11-23 07:05:49.03142151 +0000 UTC m=+932.728826299" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.035330 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tld9r"] Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.036482 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.039580 5028 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7ndh2" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.073187 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tld9r"] Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.163884 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8240319-4d2d-4d44-87bd-96f5dba0a49c-bound-sa-token\") pod \"cert-manager-86cb77c54b-tld9r\" (UID: \"f8240319-4d2d-4d44-87bd-96f5dba0a49c\") " pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.163994 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srwdt\" (UniqueName: \"kubernetes.io/projected/f8240319-4d2d-4d44-87bd-96f5dba0a49c-kube-api-access-srwdt\") pod \"cert-manager-86cb77c54b-tld9r\" (UID: \"f8240319-4d2d-4d44-87bd-96f5dba0a49c\") " pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.265653 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8240319-4d2d-4d44-87bd-96f5dba0a49c-bound-sa-token\") pod \"cert-manager-86cb77c54b-tld9r\" (UID: \"f8240319-4d2d-4d44-87bd-96f5dba0a49c\") " pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.265715 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srwdt\" (UniqueName: \"kubernetes.io/projected/f8240319-4d2d-4d44-87bd-96f5dba0a49c-kube-api-access-srwdt\") pod \"cert-manager-86cb77c54b-tld9r\" (UID: \"f8240319-4d2d-4d44-87bd-96f5dba0a49c\") " pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.293354 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8240319-4d2d-4d44-87bd-96f5dba0a49c-bound-sa-token\") pod \"cert-manager-86cb77c54b-tld9r\" (UID: \"f8240319-4d2d-4d44-87bd-96f5dba0a49c\") " pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.293895 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srwdt\" (UniqueName: \"kubernetes.io/projected/f8240319-4d2d-4d44-87bd-96f5dba0a49c-kube-api-access-srwdt\") pod \"cert-manager-86cb77c54b-tld9r\" (UID: \"f8240319-4d2d-4d44-87bd-96f5dba0a49c\") " pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.382467 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-tld9r" Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.644452 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tld9r"] Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.990241 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-tld9r" event={"ID":"f8240319-4d2d-4d44-87bd-96f5dba0a49c","Type":"ContainerStarted","Data":"d59a4aee469cdbe45d0db3620b4d9729114517b9be92f41dfb241cf4df2a9e22"} Nov 23 07:05:49 crc kubenswrapper[5028]: I1123 07:05:49.990642 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-tld9r" event={"ID":"f8240319-4d2d-4d44-87bd-96f5dba0a49c","Type":"ContainerStarted","Data":"3d3dbb83b4822e1864b57d4c5950cf1b3f89e8cb0e178fc24f35e6630e0d2804"} Nov 23 07:05:50 crc kubenswrapper[5028]: I1123 07:05:50.013831 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-tld9r" podStartSLOduration=1.01379218 podStartE2EDuration="1.01379218s" podCreationTimestamp="2025-11-23 07:05:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:05:50.008674056 +0000 UTC m=+933.706078845" watchObservedRunningTime="2025-11-23 07:05:50.01379218 +0000 UTC m=+933.711196969" Nov 23 07:05:53 crc kubenswrapper[5028]: I1123 07:05:53.984649 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-h2b72" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.020247 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fs9dw"] Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.021541 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fs9dw" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.023594 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.027203 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-mwnxf" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.032052 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.075095 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fs9dw"] Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.077643 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqpp\" (UniqueName: \"kubernetes.io/projected/108d10bc-ef38-4875-b8c1-776d7b25938e-kube-api-access-7rqpp\") pod \"openstack-operator-index-fs9dw\" (UID: \"108d10bc-ef38-4875-b8c1-776d7b25938e\") " pod="openstack-operators/openstack-operator-index-fs9dw" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.179380 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqpp\" (UniqueName: \"kubernetes.io/projected/108d10bc-ef38-4875-b8c1-776d7b25938e-kube-api-access-7rqpp\") pod \"openstack-operator-index-fs9dw\" (UID: \"108d10bc-ef38-4875-b8c1-776d7b25938e\") " pod="openstack-operators/openstack-operator-index-fs9dw" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.200067 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqpp\" (UniqueName: \"kubernetes.io/projected/108d10bc-ef38-4875-b8c1-776d7b25938e-kube-api-access-7rqpp\") pod \"openstack-operator-index-fs9dw\" (UID: \"108d10bc-ef38-4875-b8c1-776d7b25938e\") " pod="openstack-operators/openstack-operator-index-fs9dw" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.339781 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fs9dw" Nov 23 07:05:57 crc kubenswrapper[5028]: I1123 07:05:57.542066 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fs9dw"] Nov 23 07:05:58 crc kubenswrapper[5028]: I1123 07:05:58.045905 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fs9dw" event={"ID":"108d10bc-ef38-4875-b8c1-776d7b25938e","Type":"ContainerStarted","Data":"2908fef43b2894c2dfe3f210e3260caaf7320fb7c5d3a58b42e502a0723398bb"} Nov 23 07:05:59 crc kubenswrapper[5028]: I1123 07:05:59.062207 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fs9dw" event={"ID":"108d10bc-ef38-4875-b8c1-776d7b25938e","Type":"ContainerStarted","Data":"5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4"} Nov 23 07:05:59 crc kubenswrapper[5028]: I1123 07:05:59.078873 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fs9dw" podStartSLOduration=0.905494582 podStartE2EDuration="2.078848796s" podCreationTimestamp="2025-11-23 07:05:57 +0000 UTC" firstStartedPulling="2025-11-23 07:05:57.558468322 +0000 UTC m=+941.255873101" lastFinishedPulling="2025-11-23 07:05:58.731822516 +0000 UTC m=+942.429227315" observedRunningTime="2025-11-23 07:05:59.076224033 +0000 UTC m=+942.773628812" watchObservedRunningTime="2025-11-23 07:05:59.078848796 +0000 UTC m=+942.776253575" Nov 23 07:06:00 crc kubenswrapper[5028]: I1123 07:06:00.393170 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fs9dw"] Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.002794 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rc4v5"] Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.003821 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.015702 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rc4v5"] Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.075508 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fs9dw" podUID="108d10bc-ef38-4875-b8c1-776d7b25938e" containerName="registry-server" containerID="cri-o://5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4" gracePeriod=2 Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.141215 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhgn\" (UniqueName: \"kubernetes.io/projected/f2456c36-742d-4a62-985b-2155c0caab72-kube-api-access-wmhgn\") pod \"openstack-operator-index-rc4v5\" (UID: \"f2456c36-742d-4a62-985b-2155c0caab72\") " pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.242025 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhgn\" (UniqueName: \"kubernetes.io/projected/f2456c36-742d-4a62-985b-2155c0caab72-kube-api-access-wmhgn\") pod \"openstack-operator-index-rc4v5\" (UID: \"f2456c36-742d-4a62-985b-2155c0caab72\") " pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.270418 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhgn\" (UniqueName: \"kubernetes.io/projected/f2456c36-742d-4a62-985b-2155c0caab72-kube-api-access-wmhgn\") pod \"openstack-operator-index-rc4v5\" (UID: \"f2456c36-742d-4a62-985b-2155c0caab72\") " pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.367023 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.461847 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fs9dw" Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.545481 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rqpp\" (UniqueName: \"kubernetes.io/projected/108d10bc-ef38-4875-b8c1-776d7b25938e-kube-api-access-7rqpp\") pod \"108d10bc-ef38-4875-b8c1-776d7b25938e\" (UID: \"108d10bc-ef38-4875-b8c1-776d7b25938e\") " Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.552520 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108d10bc-ef38-4875-b8c1-776d7b25938e-kube-api-access-7rqpp" (OuterVolumeSpecName: "kube-api-access-7rqpp") pod "108d10bc-ef38-4875-b8c1-776d7b25938e" (UID: "108d10bc-ef38-4875-b8c1-776d7b25938e"). InnerVolumeSpecName "kube-api-access-7rqpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.596178 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rc4v5"] Nov 23 07:06:01 crc kubenswrapper[5028]: I1123 07:06:01.647012 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rqpp\" (UniqueName: \"kubernetes.io/projected/108d10bc-ef38-4875-b8c1-776d7b25938e-kube-api-access-7rqpp\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.086416 5028 generic.go:334] "Generic (PLEG): container finished" podID="108d10bc-ef38-4875-b8c1-776d7b25938e" containerID="5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4" exitCode=0 Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.086493 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fs9dw" event={"ID":"108d10bc-ef38-4875-b8c1-776d7b25938e","Type":"ContainerDied","Data":"5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4"} Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.086522 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fs9dw" Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.086608 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fs9dw" event={"ID":"108d10bc-ef38-4875-b8c1-776d7b25938e","Type":"ContainerDied","Data":"2908fef43b2894c2dfe3f210e3260caaf7320fb7c5d3a58b42e502a0723398bb"} Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.086642 5028 scope.go:117] "RemoveContainer" containerID="5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4" Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.092421 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rc4v5" event={"ID":"f2456c36-742d-4a62-985b-2155c0caab72","Type":"ContainerStarted","Data":"afd79ac3558c2aebf7d5601af762c0dab4732152db752b7949b43bae640d5407"} Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.115477 5028 scope.go:117] "RemoveContainer" containerID="5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4" Nov 23 07:06:02 crc kubenswrapper[5028]: E1123 07:06:02.117424 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4\": container with ID starting with 5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4 not found: ID does not exist" containerID="5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4" Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.117466 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4"} err="failed to get container status \"5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4\": rpc error: code = NotFound desc = could not find container \"5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4\": container with ID starting with 5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4 not found: ID does not exist" Nov 23 07:06:02 crc kubenswrapper[5028]: I1123 07:06:02.121767 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fs9dw"] Nov 23 07:06:02 
crc kubenswrapper[5028]: I1123 07:06:02.125052 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fs9dw"] Nov 23 07:06:03 crc kubenswrapper[5028]: I1123 07:06:03.061625 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108d10bc-ef38-4875-b8c1-776d7b25938e" path="/var/lib/kubelet/pods/108d10bc-ef38-4875-b8c1-776d7b25938e/volumes" Nov 23 07:06:03 crc kubenswrapper[5028]: I1123 07:06:03.103487 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rc4v5" event={"ID":"f2456c36-742d-4a62-985b-2155c0caab72","Type":"ContainerStarted","Data":"444445389cb9f367f527e5ab534aaee90a3192fe5af4672098f23a2afa3d7a58"} Nov 23 07:06:03 crc kubenswrapper[5028]: I1123 07:06:03.126262 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rc4v5" podStartSLOduration=2.678958549 podStartE2EDuration="3.126225923s" podCreationTimestamp="2025-11-23 07:06:00 +0000 UTC" firstStartedPulling="2025-11-23 07:06:01.607835947 +0000 UTC m=+945.305240726" lastFinishedPulling="2025-11-23 07:06:02.055103321 +0000 UTC m=+945.752508100" observedRunningTime="2025-11-23 07:06:03.120864343 +0000 UTC m=+946.818269122" watchObservedRunningTime="2025-11-23 07:06:03.126225923 +0000 UTC m=+946.823630722" Nov 23 07:06:11 crc kubenswrapper[5028]: I1123 07:06:11.367334 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:11 crc kubenswrapper[5028]: I1123 07:06:11.367859 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:11 crc kubenswrapper[5028]: I1123 07:06:11.399836 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:12 crc kubenswrapper[5028]: I1123 07:06:12.210497 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rc4v5" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.060942 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw"] Nov 23 07:06:18 crc kubenswrapper[5028]: E1123 07:06:18.062088 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108d10bc-ef38-4875-b8c1-776d7b25938e" containerName="registry-server" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.062116 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="108d10bc-ef38-4875-b8c1-776d7b25938e" containerName="registry-server" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.062307 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="108d10bc-ef38-4875-b8c1-776d7b25938e" containerName="registry-server" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.063701 5028 util.go:30] "No sandbox for pod can be found. 
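The NotFound at 07:06:02.117 above is a benign race: the registry-server container had already been removed by the first RemoveContainer pass, so the follow-up ContainerStatus lookup against CRI-O fails with gRPC NotFound and the kubelet simply logs "DeleteContainer returned error" and moves on. When auditing teardown sequences like this one, the "container finished" records carry the useful signal (pod UID, container ID, exit code). A small stdlib-only sketch that extracts them from a journal excerpt like this one (the regex is an assumption tailored to the generic.go:334 format above, not kubelet code):

package main

import (
	"fmt"
	"regexp"
)

// Matches generic.go:334 records of the form:
//   "Generic (PLEG): container finished" podID="..." containerID="..." exitCode=0
var finished = regexp.MustCompile(
	`container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(-?\d+)`)

func main() {
	journal := `I1123 07:06:02.086416 5028 generic.go:334] "Generic (PLEG): container finished" podID="108d10bc-ef38-4875-b8c1-776d7b25938e" containerID="5b2adbbd2320af3376ea5201227b9fa0bc2796702f7ecb1a8885ecc3681826a4" exitCode=0`
	for _, m := range finished.FindAllStringSubmatch(journal, -1) {
		fmt.Printf("pod=%s container=%.12s exit=%s\n", m[1], m[2], m[3])
	}
}

Run against the full excerpt, it would also pick up the bundle-unpack pod's container exits at 07:06:20 through 07:06:26 below.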
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.067511 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xpk6z" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.082727 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw"] Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.113013 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.113472 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjw72\" (UniqueName: \"kubernetes.io/projected/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-kube-api-access-hjw72\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.113580 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.215616 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjw72\" (UniqueName: \"kubernetes.io/projected/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-kube-api-access-hjw72\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.215694 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.215797 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.216363 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.216458 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.249819 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjw72\" (UniqueName: \"kubernetes.io/projected/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-kube-api-access-hjw72\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.387572 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:18 crc kubenswrapper[5028]: W1123 07:06:18.699012 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a79fbbe_cddf_4c2c_aaeb_1ccf2e3f0065.slice/crio-a8ee94d9c9d9e1f5806e12ccc6831c01b40d9ea507f05409bf104c6f71e29a7b WatchSource:0}: Error finding container a8ee94d9c9d9e1f5806e12ccc6831c01b40d9ea507f05409bf104c6f71e29a7b: Status 404 returned error can't find the container with id a8ee94d9c9d9e1f5806e12ccc6831c01b40d9ea507f05409bf104c6f71e29a7b Nov 23 07:06:18 crc kubenswrapper[5028]: I1123 07:06:18.699622 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw"] Nov 23 07:06:19 crc kubenswrapper[5028]: I1123 07:06:19.222407 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" event={"ID":"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065","Type":"ContainerStarted","Data":"a8ee94d9c9d9e1f5806e12ccc6831c01b40d9ea507f05409bf104c6f71e29a7b"} Nov 23 07:06:20 crc kubenswrapper[5028]: I1123 07:06:20.232691 5028 generic.go:334] "Generic (PLEG): container finished" podID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerID="a549788a074271af98f2f5f0e9c88e8c4bfb5681f5cf2ce1f8b06b336efad6b4" exitCode=0 Nov 23 07:06:20 crc kubenswrapper[5028]: I1123 07:06:20.232750 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" event={"ID":"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065","Type":"ContainerDied","Data":"a549788a074271af98f2f5f0e9c88e8c4bfb5681f5cf2ce1f8b06b336efad6b4"} Nov 23 07:06:25 crc kubenswrapper[5028]: I1123 07:06:25.270095 5028 generic.go:334] "Generic (PLEG): container finished" podID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerID="aa00fa963cb88114d8ce2fc168c51592954b15437f3a891363ee00305d910ef4" exitCode=0 Nov 23 07:06:25 crc kubenswrapper[5028]: I1123 07:06:25.270155 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" event={"ID":"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065","Type":"ContainerDied","Data":"aa00fa963cb88114d8ce2fc168c51592954b15437f3a891363ee00305d910ef4"} Nov 23 07:06:26 crc kubenswrapper[5028]: I1123 07:06:26.283601 5028 generic.go:334] "Generic (PLEG): container finished" podID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerID="3f0d1b642d2a6ac438cf20eea7e3404ea56c1e04209448213269ab0ae7d54f19" exitCode=0 Nov 23 07:06:26 crc kubenswrapper[5028]: I1123 07:06:26.283689 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" event={"ID":"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065","Type":"ContainerDied","Data":"3f0d1b642d2a6ac438cf20eea7e3404ea56c1e04209448213269ab0ae7d54f19"} Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.583515 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.767267 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-bundle\") pod \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.767479 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjw72\" (UniqueName: \"kubernetes.io/projected/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-kube-api-access-hjw72\") pod \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.767517 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-util\") pod \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\" (UID: \"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065\") " Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.768612 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-bundle" (OuterVolumeSpecName: "bundle") pod "8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" (UID: "8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.774818 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-kube-api-access-hjw72" (OuterVolumeSpecName: "kube-api-access-hjw72") pod "8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" (UID: "8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065"). InnerVolumeSpecName "kube-api-access-hjw72". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.778844 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-util" (OuterVolumeSpecName: "util") pod "8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" (UID: "8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.869038 5028 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.869077 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjw72\" (UniqueName: \"kubernetes.io/projected/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-kube-api-access-hjw72\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:27 crc kubenswrapper[5028]: I1123 07:06:27.869092 5028 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065-util\") on node \"crc\" DevicePath \"\"" Nov 23 07:06:28 crc kubenswrapper[5028]: I1123 07:06:28.307918 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" event={"ID":"8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065","Type":"ContainerDied","Data":"a8ee94d9c9d9e1f5806e12ccc6831c01b40d9ea507f05409bf104c6f71e29a7b"} Nov 23 07:06:28 crc kubenswrapper[5028]: I1123 07:06:28.308048 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ee94d9c9d9e1f5806e12ccc6831c01b40d9ea507f05409bf104c6f71e29a7b" Nov 23 07:06:28 crc kubenswrapper[5028]: I1123 07:06:28.308010 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.875767 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l"] Nov 23 07:06:35 crc kubenswrapper[5028]: E1123 07:06:35.878421 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerName="pull" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.878510 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerName="pull" Nov 23 07:06:35 crc kubenswrapper[5028]: E1123 07:06:35.878568 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerName="util" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.878627 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerName="util" Nov 23 07:06:35 crc kubenswrapper[5028]: E1123 07:06:35.878871 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerName="extract" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.878963 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerName="extract" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.879166 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065" containerName="extract" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.880024 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.885288 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-ksn47" Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.906087 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l"] Nov 23 07:06:35 crc kubenswrapper[5028]: I1123 07:06:35.984118 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtkz\" (UniqueName: \"kubernetes.io/projected/2f8bf0fc-0cb2-4726-96ce-c378818da6dd-kube-api-access-4rtkz\") pod \"openstack-operator-controller-operator-8486c7f98b-9xt7l\" (UID: \"2f8bf0fc-0cb2-4726-96ce-c378818da6dd\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" Nov 23 07:06:36 crc kubenswrapper[5028]: I1123 07:06:36.085606 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtkz\" (UniqueName: \"kubernetes.io/projected/2f8bf0fc-0cb2-4726-96ce-c378818da6dd-kube-api-access-4rtkz\") pod \"openstack-operator-controller-operator-8486c7f98b-9xt7l\" (UID: \"2f8bf0fc-0cb2-4726-96ce-c378818da6dd\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" Nov 23 07:06:36 crc kubenswrapper[5028]: I1123 07:06:36.104889 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtkz\" (UniqueName: \"kubernetes.io/projected/2f8bf0fc-0cb2-4726-96ce-c378818da6dd-kube-api-access-4rtkz\") pod \"openstack-operator-controller-operator-8486c7f98b-9xt7l\" (UID: \"2f8bf0fc-0cb2-4726-96ce-c378818da6dd\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" Nov 23 07:06:36 crc kubenswrapper[5028]: I1123 07:06:36.196680 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" Nov 23 07:06:36 crc kubenswrapper[5028]: I1123 07:06:36.532033 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l"] Nov 23 07:06:37 crc kubenswrapper[5028]: I1123 07:06:37.366863 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" event={"ID":"2f8bf0fc-0cb2-4726-96ce-c378818da6dd","Type":"ContainerStarted","Data":"2de7b05650840816c7b51a440f8639a44477ba8274038c09ee131f7d315d8aac"} Nov 23 07:06:42 crc kubenswrapper[5028]: I1123 07:06:42.398467 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" event={"ID":"2f8bf0fc-0cb2-4726-96ce-c378818da6dd","Type":"ContainerStarted","Data":"b83669e302ea8800f54e52036fa2d501c2046e604fe826caa31d68ca8e178c78"} Nov 23 07:06:45 crc kubenswrapper[5028]: I1123 07:06:45.416457 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" event={"ID":"2f8bf0fc-0cb2-4726-96ce-c378818da6dd","Type":"ContainerStarted","Data":"49425f3f0753cdebaa192aceeb14dd364dd4dc431065c8ce58883541c655033c"} Nov 23 07:06:45 crc kubenswrapper[5028]: I1123 07:06:45.417004 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" Nov 23 07:06:45 crc kubenswrapper[5028]: I1123 07:06:45.453003 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" podStartSLOduration=1.803209554 podStartE2EDuration="10.452934715s" podCreationTimestamp="2025-11-23 07:06:35 +0000 UTC" firstStartedPulling="2025-11-23 07:06:36.549803375 +0000 UTC m=+980.247208154" lastFinishedPulling="2025-11-23 07:06:45.199528536 +0000 UTC m=+988.896933315" observedRunningTime="2025-11-23 07:06:45.447230156 +0000 UTC m=+989.144634945" watchObservedRunningTime="2025-11-23 07:06:45.452934715 +0000 UTC m=+989.150339494" Nov 23 07:06:46 crc kubenswrapper[5028]: I1123 07:06:46.199558 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-9xt7l" Nov 23 07:07:00 crc kubenswrapper[5028]: I1123 07:07:00.946841 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:07:00 crc kubenswrapper[5028]: I1123 07:07:00.947392 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.210533 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.212010 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.214444 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4d5l6" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.234477 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.235698 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.240095 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-n68zp" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.242129 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.264892 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.268416 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.275412 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ch7mp" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.280053 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.282712 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zhxf\" (UniqueName: \"kubernetes.io/projected/b4535624-ea6d-4e72-be76-c37915bcfe54-kube-api-access-4zhxf\") pod \"cinder-operator-controller-manager-6d8fd67bf7-bwdxx\" (UID: \"b4535624-ea6d-4e72-be76-c37915bcfe54\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.282860 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdcqr\" (UniqueName: \"kubernetes.io/projected/cfecab10-1421-49e5-9a36-f14bc9a61340-kube-api-access-qdcqr\") pod \"barbican-operator-controller-manager-7768f8c84f-zr9nj\" (UID: \"cfecab10-1421-49e5-9a36-f14bc9a61340\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.285999 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.287140 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.291070 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.295031 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-trr76" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.325154 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.326216 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.347719 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.351186 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jhx2x" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.379076 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.386685 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdcqr\" (UniqueName: \"kubernetes.io/projected/cfecab10-1421-49e5-9a36-f14bc9a61340-kube-api-access-qdcqr\") pod \"barbican-operator-controller-manager-7768f8c84f-zr9nj\" (UID: \"cfecab10-1421-49e5-9a36-f14bc9a61340\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.386750 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtcp\" (UniqueName: \"kubernetes.io/projected/76d87e89-eead-45c1-89b0-053b0e595751-kube-api-access-7jtcp\") pod \"designate-operator-controller-manager-56dfb6b67f-fr59d\" (UID: \"76d87e89-eead-45c1-89b0-053b0e595751\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.386777 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9jqb\" (UniqueName: \"kubernetes.io/projected/b042c881-3ca3-44d1-916a-1ed4205b66e1-kube-api-access-d9jqb\") pod \"glance-operator-controller-manager-8667fbf6f6-85pdk\" (UID: \"b042c881-3ca3-44d1-916a-1ed4205b66e1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.386797 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zhxf\" (UniqueName: \"kubernetes.io/projected/b4535624-ea6d-4e72-be76-c37915bcfe54-kube-api-access-4zhxf\") pod \"cinder-operator-controller-manager-6d8fd67bf7-bwdxx\" (UID: \"b4535624-ea6d-4e72-be76-c37915bcfe54\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.386877 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ltnwj\" (UniqueName: \"kubernetes.io/projected/8ef3e26b-808d-455b-a88e-1fc7d5f81fc3-kube-api-access-ltnwj\") pod \"heat-operator-controller-manager-bf4c6585d-6cwkk\" (UID: \"8ef3e26b-808d-455b-a88e-1fc7d5f81fc3\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.404088 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.405153 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.409607 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mctf9" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.429961 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdcqr\" (UniqueName: \"kubernetes.io/projected/cfecab10-1421-49e5-9a36-f14bc9a61340-kube-api-access-qdcqr\") pod \"barbican-operator-controller-manager-7768f8c84f-zr9nj\" (UID: \"cfecab10-1421-49e5-9a36-f14bc9a61340\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.431767 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zhxf\" (UniqueName: \"kubernetes.io/projected/b4535624-ea6d-4e72-be76-c37915bcfe54-kube-api-access-4zhxf\") pod \"cinder-operator-controller-manager-6d8fd67bf7-bwdxx\" (UID: \"b4535624-ea6d-4e72-be76-c37915bcfe54\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.453796 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.472677 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.474643 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.484510 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.484637 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wg6mc" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.488535 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8np2k\" (UniqueName: \"kubernetes.io/projected/400d2d41-03cf-4d6d-966b-c1676ec373d6-kube-api-access-8np2k\") pod \"horizon-operator-controller-manager-5d86b44686-hscqp\" (UID: \"400d2d41-03cf-4d6d-966b-c1676ec373d6\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.488632 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltnwj\" (UniqueName: \"kubernetes.io/projected/8ef3e26b-808d-455b-a88e-1fc7d5f81fc3-kube-api-access-ltnwj\") pod \"heat-operator-controller-manager-bf4c6585d-6cwkk\" (UID: \"8ef3e26b-808d-455b-a88e-1fc7d5f81fc3\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.488685 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jtcp\" (UniqueName: \"kubernetes.io/projected/76d87e89-eead-45c1-89b0-053b0e595751-kube-api-access-7jtcp\") pod \"designate-operator-controller-manager-56dfb6b67f-fr59d\" (UID: \"76d87e89-eead-45c1-89b0-053b0e595751\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.488715 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9jqb\" (UniqueName: \"kubernetes.io/projected/b042c881-3ca3-44d1-916a-1ed4205b66e1-kube-api-access-d9jqb\") pod \"glance-operator-controller-manager-8667fbf6f6-85pdk\" (UID: \"b042c881-3ca3-44d1-916a-1ed4205b66e1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.497484 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.518777 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9jqb\" (UniqueName: \"kubernetes.io/projected/b042c881-3ca3-44d1-916a-1ed4205b66e1-kube-api-access-d9jqb\") pod \"glance-operator-controller-manager-8667fbf6f6-85pdk\" (UID: \"b042c881-3ca3-44d1-916a-1ed4205b66e1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.519401 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.521595 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-d5zl7" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.525086 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jtcp\" (UniqueName: \"kubernetes.io/projected/76d87e89-eead-45c1-89b0-053b0e595751-kube-api-access-7jtcp\") pod \"designate-operator-controller-manager-56dfb6b67f-fr59d\" (UID: \"76d87e89-eead-45c1-89b0-053b0e595751\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.525402 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.530145 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltnwj\" (UniqueName: \"kubernetes.io/projected/8ef3e26b-808d-455b-a88e-1fc7d5f81fc3-kube-api-access-ltnwj\") pod \"heat-operator-controller-manager-bf4c6585d-6cwkk\" (UID: \"8ef3e26b-808d-455b-a88e-1fc7d5f81fc3\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.539719 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.559816 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.564427 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.565870 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.568385 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pft55" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.587321 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.596856 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.597730 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.597765 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc4sq\" (UniqueName: \"kubernetes.io/projected/2390b681-a671-4a61-a36d-6ec38f13f97f-kube-api-access-bc4sq\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.597838 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx2fm\" (UniqueName: \"kubernetes.io/projected/34fd9714-3561-4cc7-9713-9f2788bf5ee4-kube-api-access-zx2fm\") pod \"keystone-operator-controller-manager-7879fb76fd-r9zdz\" (UID: \"34fd9714-3561-4cc7-9713-9f2788bf5ee4\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.598007 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8np2k\" (UniqueName: \"kubernetes.io/projected/400d2d41-03cf-4d6d-966b-c1676ec373d6-kube-api-access-8np2k\") pod \"horizon-operator-controller-manager-5d86b44686-hscqp\" (UID: \"400d2d41-03cf-4d6d-966b-c1676ec373d6\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.598342 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.602812 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rq2ht" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.610453 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.613150 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.632355 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.634825 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.636462 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.638631 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8np2k\" (UniqueName: \"kubernetes.io/projected/400d2d41-03cf-4d6d-966b-c1676ec373d6-kube-api-access-8np2k\") pod \"horizon-operator-controller-manager-5d86b44686-hscqp\" (UID: \"400d2d41-03cf-4d6d-966b-c1676ec373d6\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.641199 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wbf7n" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.646050 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.662925 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.663927 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.668257 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.670111 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.672396 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r8v8g" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.676035 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.689910 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.691035 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.693188 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vgsmd" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.694554 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-86q68"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.695689 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.697060 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-g6l8f" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.698029 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.698745 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx2fm\" (UniqueName: \"kubernetes.io/projected/34fd9714-3561-4cc7-9713-9f2788bf5ee4-kube-api-access-zx2fm\") pod \"keystone-operator-controller-manager-7879fb76fd-r9zdz\" (UID: \"34fd9714-3561-4cc7-9713-9f2788bf5ee4\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.698790 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74mmw\" (UniqueName: \"kubernetes.io/projected/0755ad7e-aa96-4555-a4d8-dffb11e45807-kube-api-access-74mmw\") pod \"manila-operator-controller-manager-7bb88cb858-f2nqb\" (UID: \"0755ad7e-aa96-4555-a4d8-dffb11e45807\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.698821 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4wr\" (UniqueName: \"kubernetes.io/projected/9ef25d42-bb38-4a2e-9a7b-a83dd0e30344-kube-api-access-8g4wr\") pod \"ironic-operator-controller-manager-5c75d7c94b-tl2zc\" (UID: \"9ef25d42-bb38-4a2e-9a7b-a83dd0e30344\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.698849 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.698867 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc4sq\" (UniqueName: \"kubernetes.io/projected/2390b681-a671-4a61-a36d-6ec38f13f97f-kube-api-access-bc4sq\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.698934 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zkvr\" (UniqueName: \"kubernetes.io/projected/c2127314-0ad9-46fe-946e-a738b8bdcd12-kube-api-access-7zkvr\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-8qf7p\" (UID: \"c2127314-0ad9-46fe-946e-a738b8bdcd12\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" Nov 23 07:07:16 crc kubenswrapper[5028]: E1123 07:07:16.699302 5028 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 23 07:07:16 crc kubenswrapper[5028]: E1123 07:07:16.699343 
5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert podName:2390b681-a671-4a61-a36d-6ec38f13f97f nodeName:}" failed. No retries permitted until 2025-11-23 07:07:17.19932994 +0000 UTC m=+1020.896734719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert") pod "infra-operator-controller-manager-769d9c7585-gphz8" (UID: "2390b681-a671-4a61-a36d-6ec38f13f97f") : secret "infra-operator-webhook-server-cert" not found
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.702219 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-86q68"]
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.717717 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25"]
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.719235 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25"
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.723385 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-b6bvc"
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.733201 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk"]
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.738670 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc4sq\" (UniqueName: \"kubernetes.io/projected/2390b681-a671-4a61-a36d-6ec38f13f97f-kube-api-access-bc4sq\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8"
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.748137 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx2fm\" (UniqueName: \"kubernetes.io/projected/34fd9714-3561-4cc7-9713-9f2788bf5ee4-kube-api-access-zx2fm\") pod \"keystone-operator-controller-manager-7879fb76fd-r9zdz\" (UID: \"34fd9714-3561-4cc7-9713-9f2788bf5ee4\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz"
Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.753383 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25"]
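The E1123 pair above (secret.go:188 followed by nestedpendingoperations.go:348) records the kubelet's per-volume retry backoff: the first failed MountVolume.SetUp for the infra-operator "cert" volume is deferred by 500ms, and when the secret is still missing at 07:07:17.215 below, the next attempt is deferred by 1s. The sketch below illustrates that doubling delay; the type and field names, and the cap, are illustrative assumptions, not the kubelet's actual nestedpendingoperations code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// backoffEntry mirrors, in spirit, the state the kubelet keeps per volume:
// the last error and a doubling delay before the next attempt. The names
// and the cap are illustrative assumptions, not kubelet internals.
type backoffEntry struct {
	lastError      error
	durationBefore time.Duration // the "durationBeforeRetry" in the log
	retryNotBefore time.Time     // the "No retries permitted until" deadline
}

const (
	initialBackoff = 500 * time.Millisecond // first deferral seen in the log
	maxBackoff     = 2 * time.Minute        // illustrative cap, not kubelet's exact value
)

// recordFailure registers a failed MountVolume.SetUp and doubles the delay,
// reproducing the 500ms -> 1s progression visible in these entries.
func (b *backoffEntry) recordFailure(now time.Time, err error) {
	switch {
	case b.durationBefore == 0:
		b.durationBefore = initialBackoff
	case b.durationBefore < maxBackoff:
		b.durationBefore *= 2
	}
	b.lastError = err
	b.retryNotBefore = now.Add(b.durationBefore)
}

// mayRetry reports whether a new mount attempt is permitted yet.
func (b *backoffEntry) mayRetry(now time.Time) bool {
	return now.After(b.retryNotBefore)
}

func main() {
	var cert backoffEntry
	now := time.Now()
	mountErr := errors.New(`secret "infra-operator-webhook-server-cert" not found`)

	for attempt := 1; attempt <= 2; attempt++ {
		if !cert.mayRetry(now) {
			continue // the kubelet skips the attempt entirely before the deadline
		}
		cert.recordFailure(now, mountErr)
		fmt.Printf("attempt %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, cert.retryNotBefore.Format("15:04:05.000"), cert.durationBefore)
		now = cert.retryNotBefore.Add(time.Millisecond) // jump past the deadline
	}
}
```

A further failure would be deferred by 2s, and so on up to the cap; in the log the loop instead ends at 07:07:18.263743 below, when the infra-operator-webhook-server-cert secret finally exists and the mount succeeds.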
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.759395 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.760045 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jch7w" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.767567 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.768168 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.772030 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.779830 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.781987 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.782937 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fqqm8" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.783665 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.788885 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.791358 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4cxnm" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800681 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5hl\" (UniqueName: \"kubernetes.io/projected/3f4726b9-823e-4abf-b301-6c020b882874-kube-api-access-4f5hl\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800736 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlq9\" (UniqueName: \"kubernetes.io/projected/617b9fb7-df28-4230-a26f-41fd18a75cd7-kube-api-access-4jlq9\") pod \"neutron-operator-controller-manager-66b7d6f598-g4fnt\" (UID: \"617b9fb7-df28-4230-a26f-41fd18a75cd7\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800779 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74mmw\" (UniqueName: \"kubernetes.io/projected/0755ad7e-aa96-4555-a4d8-dffb11e45807-kube-api-access-74mmw\") pod \"manila-operator-controller-manager-7bb88cb858-f2nqb\" (UID: \"0755ad7e-aa96-4555-a4d8-dffb11e45807\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800807 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g4wr\" (UniqueName: \"kubernetes.io/projected/9ef25d42-bb38-4a2e-9a7b-a83dd0e30344-kube-api-access-8g4wr\") pod \"ironic-operator-controller-manager-5c75d7c94b-tl2zc\" (UID: \"9ef25d42-bb38-4a2e-9a7b-a83dd0e30344\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800876 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zkvr\" (UniqueName: \"kubernetes.io/projected/c2127314-0ad9-46fe-946e-a738b8bdcd12-kube-api-access-7zkvr\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-8qf7p\" (UID: \"c2127314-0ad9-46fe-946e-a738b8bdcd12\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800902 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800921 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-778tx\" (UniqueName: \"kubernetes.io/projected/b31fd080-61d4-4dea-a594-932fbbccf98b-kube-api-access-778tx\") pod \"octavia-operator-controller-manager-6fdc856c5d-nbtch\" (UID: \"b31fd080-61d4-4dea-a594-932fbbccf98b\") " 
pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800939 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8d8p\" (UniqueName: \"kubernetes.io/projected/65571797-5661-45f7-8ec9-b87dbe97a10a-kube-api-access-n8d8p\") pod \"nova-operator-controller-manager-86d796d84d-86q68\" (UID: \"65571797-5661-45f7-8ec9-b87dbe97a10a\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.800982 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxk2\" (UniqueName: \"kubernetes.io/projected/00ecba7a-6f06-4513-9c6b-239606cc6462-kube-api-access-7dxk2\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-jtt25\" (UID: \"00ecba7a-6f06-4513-9c6b-239606cc6462\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.807018 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.812573 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mxw2x" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.826857 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.840111 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.847871 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74mmw\" (UniqueName: \"kubernetes.io/projected/0755ad7e-aa96-4555-a4d8-dffb11e45807-kube-api-access-74mmw\") pod \"manila-operator-controller-manager-7bb88cb858-f2nqb\" (UID: \"0755ad7e-aa96-4555-a4d8-dffb11e45807\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.850117 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zkvr\" (UniqueName: \"kubernetes.io/projected/c2127314-0ad9-46fe-946e-a738b8bdcd12-kube-api-access-7zkvr\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-8qf7p\" (UID: \"c2127314-0ad9-46fe-946e-a738b8bdcd12\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.875661 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g4wr\" (UniqueName: \"kubernetes.io/projected/9ef25d42-bb38-4a2e-9a7b-a83dd0e30344-kube-api-access-8g4wr\") pod \"ironic-operator-controller-manager-5c75d7c94b-tl2zc\" (UID: \"9ef25d42-bb38-4a2e-9a7b-a83dd0e30344\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.879928 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.903938 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4jlq9\" (UniqueName: \"kubernetes.io/projected/617b9fb7-df28-4230-a26f-41fd18a75cd7-kube-api-access-4jlq9\") pod \"neutron-operator-controller-manager-66b7d6f598-g4fnt\" (UID: \"617b9fb7-df28-4230-a26f-41fd18a75cd7\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.904235 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gktn\" (UniqueName: \"kubernetes.io/projected/10bd2367-5a93-4e29-8242-737023dd21a5-kube-api-access-6gktn\") pod \"placement-operator-controller-manager-6dc664666c-67p4j\" (UID: \"10bd2367-5a93-4e29-8242-737023dd21a5\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.904406 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.904499 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-778tx\" (UniqueName: \"kubernetes.io/projected/b31fd080-61d4-4dea-a594-932fbbccf98b-kube-api-access-778tx\") pod \"octavia-operator-controller-manager-6fdc856c5d-nbtch\" (UID: \"b31fd080-61d4-4dea-a594-932fbbccf98b\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.904609 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czd7p\" (UniqueName: \"kubernetes.io/projected/681da997-6aae-43ee-9b25-3307858c63c3-kube-api-access-czd7p\") pod \"telemetry-operator-controller-manager-7798859c74-nfwpv\" (UID: \"681da997-6aae-43ee-9b25-3307858c63c3\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.904720 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8d8p\" (UniqueName: \"kubernetes.io/projected/65571797-5661-45f7-8ec9-b87dbe97a10a-kube-api-access-n8d8p\") pod \"nova-operator-controller-manager-86d796d84d-86q68\" (UID: \"65571797-5661-45f7-8ec9-b87dbe97a10a\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.904824 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxk2\" (UniqueName: \"kubernetes.io/projected/00ecba7a-6f06-4513-9c6b-239606cc6462-kube-api-access-7dxk2\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-jtt25\" (UID: \"00ecba7a-6f06-4513-9c6b-239606cc6462\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.904930 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhz2k\" (UniqueName: \"kubernetes.io/projected/ac695e8d-ade9-44ac-8737-df03a6c712b8-kube-api-access-jhz2k\") pod \"swift-operator-controller-manager-799cb6ffd6-bczft\" (UID: \"ac695e8d-ade9-44ac-8737-df03a6c712b8\") " 
pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.905175 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f5hl\" (UniqueName: \"kubernetes.io/projected/3f4726b9-823e-4abf-b301-6c020b882874-kube-api-access-4f5hl\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:16 crc kubenswrapper[5028]: E1123 07:07:16.910888 5028 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 07:07:16 crc kubenswrapper[5028]: E1123 07:07:16.911404 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert podName:3f4726b9-823e-4abf-b301-6c020b882874 nodeName:}" failed. No retries permitted until 2025-11-23 07:07:17.411375656 +0000 UTC m=+1021.108780435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" (UID: "3f4726b9-823e-4abf-b301-6c020b882874") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.949286 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-vwblx"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.954312 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.955725 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-vwblx"] Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.956109 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.970192 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-w6bnn" Nov 23 07:07:16 crc kubenswrapper[5028]: I1123 07:07:16.990515 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxk2\" (UniqueName: \"kubernetes.io/projected/00ecba7a-6f06-4513-9c6b-239606cc6462-kube-api-access-7dxk2\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-jtt25\" (UID: \"00ecba7a-6f06-4513-9c6b-239606cc6462\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:16.998531 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pft55" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.010651 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gktn\" (UniqueName: \"kubernetes.io/projected/10bd2367-5a93-4e29-8242-737023dd21a5-kube-api-access-6gktn\") pod \"placement-operator-controller-manager-6dc664666c-67p4j\" (UID: \"10bd2367-5a93-4e29-8242-737023dd21a5\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.010751 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czd7p\" (UniqueName: \"kubernetes.io/projected/681da997-6aae-43ee-9b25-3307858c63c3-kube-api-access-czd7p\") pod \"telemetry-operator-controller-manager-7798859c74-nfwpv\" (UID: \"681da997-6aae-43ee-9b25-3307858c63c3\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.010799 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhz2k\" (UniqueName: \"kubernetes.io/projected/ac695e8d-ade9-44ac-8737-df03a6c712b8-kube-api-access-jhz2k\") pod \"swift-operator-controller-manager-799cb6ffd6-bczft\" (UID: \"ac695e8d-ade9-44ac-8737-df03a6c712b8\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.010874 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snql2\" (UniqueName: \"kubernetes.io/projected/8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa-kube-api-access-snql2\") pod \"test-operator-controller-manager-8464cf66df-vwblx\" (UID: \"8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.011687 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.012281 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f5hl\" (UniqueName: \"kubernetes.io/projected/3f4726b9-823e-4abf-b301-6c020b882874-kube-api-access-4f5hl\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.017363 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8d8p\" (UniqueName: \"kubernetes.io/projected/65571797-5661-45f7-8ec9-b87dbe97a10a-kube-api-access-n8d8p\") pod \"nova-operator-controller-manager-86d796d84d-86q68\" (UID: \"65571797-5661-45f7-8ec9-b87dbe97a10a\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.018308 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rq2ht" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.019741 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.028494 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-778tx\" (UniqueName: \"kubernetes.io/projected/b31fd080-61d4-4dea-a594-932fbbccf98b-kube-api-access-778tx\") pod \"octavia-operator-controller-manager-6fdc856c5d-nbtch\" (UID: \"b31fd080-61d4-4dea-a594-932fbbccf98b\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.036655 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jlq9\" (UniqueName: \"kubernetes.io/projected/617b9fb7-df28-4230-a26f-41fd18a75cd7-kube-api-access-4jlq9\") pod \"neutron-operator-controller-manager-66b7d6f598-g4fnt\" (UID: \"617b9fb7-df28-4230-a26f-41fd18a75cd7\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.069254 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wbf7n" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.073835 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhz2k\" (UniqueName: \"kubernetes.io/projected/ac695e8d-ade9-44ac-8737-df03a6c712b8-kube-api-access-jhz2k\") pod \"swift-operator-controller-manager-799cb6ffd6-bczft\" (UID: \"ac695e8d-ade9-44ac-8737-df03a6c712b8\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.078938 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.089338 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r8v8g" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.095532 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.112479 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snql2\" (UniqueName: \"kubernetes.io/projected/8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa-kube-api-access-snql2\") pod \"test-operator-controller-manager-8464cf66df-vwblx\" (UID: \"8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.125174 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vgsmd" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.129933 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx"] Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.131635 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx"] Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.131730 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.133866 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.134737 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7gdk2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.137176 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gktn\" (UniqueName: \"kubernetes.io/projected/10bd2367-5a93-4e29-8242-737023dd21a5-kube-api-access-6gktn\") pod \"placement-operator-controller-manager-6dc664666c-67p4j\" (UID: \"10bd2367-5a93-4e29-8242-737023dd21a5\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.145373 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2"] Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.146968 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.148427 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2"] Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.151564 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9wmjm" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.152125 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czd7p\" (UniqueName: \"kubernetes.io/projected/681da997-6aae-43ee-9b25-3307858c63c3-kube-api-access-czd7p\") pod \"telemetry-operator-controller-manager-7798859c74-nfwpv\" (UID: \"681da997-6aae-43ee-9b25-3307858c63c3\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.156661 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.161058 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz"] Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.161859 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.164197 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-b6bvc" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.165124 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-g6l8f" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.167029 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.167940 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tthz5" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.168183 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz"] Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.168251 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.169767 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snql2\" (UniqueName: \"kubernetes.io/projected/8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa-kube-api-access-snql2\") pod \"test-operator-controller-manager-8464cf66df-vwblx\" (UID: \"8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.215359 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhq2v\" (UniqueName: \"kubernetes.io/projected/298c6ef1-dee2-4a62-a228-aa55fcbfa1b6-kube-api-access-xhq2v\") pod \"watcher-operator-controller-manager-7cd4fb6f79-mjvvx\" (UID: \"298c6ef1-dee2-4a62-a228-aa55fcbfa1b6\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.215415 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ade56f18-e2f8-447d-8ecd-4d396affff9b-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-nllf2\" (UID: \"ade56f18-e2f8-447d-8ecd-4d396affff9b\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.215468 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xj4\" (UniqueName: \"kubernetes.io/projected/ade56f18-e2f8-447d-8ecd-4d396affff9b-kube-api-access-75xj4\") pod \"openstack-operator-controller-manager-6cb9dc54f8-nllf2\" (UID: \"ade56f18-e2f8-447d-8ecd-4d396affff9b\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.215503 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szl6z\" (UniqueName: \"kubernetes.io/projected/ad5e138f-6210-4ef8-be27-d2b93c56b241-kube-api-access-szl6z\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz\" (UID: \"ad5e138f-6210-4ef8-be27-d2b93c56b241\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.215548 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:17 crc kubenswrapper[5028]: E1123 07:07:17.215649 5028 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 23 07:07:17 crc kubenswrapper[5028]: E1123 07:07:17.215693 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert podName:2390b681-a671-4a61-a36d-6ec38f13f97f nodeName:}" failed. No retries permitted until 2025-11-23 07:07:18.215680201 +0000 UTC m=+1021.913084980 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert") pod "infra-operator-controller-manager-769d9c7585-gphz8" (UID: "2390b681-a671-4a61-a36d-6ec38f13f97f") : secret "infra-operator-webhook-server-cert" not found Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.219883 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mxw2x" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.220141 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-fqqm8" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.226140 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.226525 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.256237 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4cxnm" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.272619 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.274738 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-w6bnn" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.283651 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.320230 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhq2v\" (UniqueName: \"kubernetes.io/projected/298c6ef1-dee2-4a62-a228-aa55fcbfa1b6-kube-api-access-xhq2v\") pod \"watcher-operator-controller-manager-7cd4fb6f79-mjvvx\" (UID: \"298c6ef1-dee2-4a62-a228-aa55fcbfa1b6\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.320290 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ade56f18-e2f8-447d-8ecd-4d396affff9b-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-nllf2\" (UID: \"ade56f18-e2f8-447d-8ecd-4d396affff9b\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.320372 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xj4\" (UniqueName: \"kubernetes.io/projected/ade56f18-e2f8-447d-8ecd-4d396affff9b-kube-api-access-75xj4\") pod \"openstack-operator-controller-manager-6cb9dc54f8-nllf2\" (UID: \"ade56f18-e2f8-447d-8ecd-4d396affff9b\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.320395 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szl6z\" (UniqueName: \"kubernetes.io/projected/ad5e138f-6210-4ef8-be27-d2b93c56b241-kube-api-access-szl6z\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz\" (UID: \"ad5e138f-6210-4ef8-be27-d2b93c56b241\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" Nov 23 07:07:17 crc kubenswrapper[5028]: E1123 07:07:17.321450 5028 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 23 07:07:17 crc kubenswrapper[5028]: E1123 07:07:17.321503 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ade56f18-e2f8-447d-8ecd-4d396affff9b-cert podName:ade56f18-e2f8-447d-8ecd-4d396affff9b nodeName:}" failed. No retries permitted until 2025-11-23 07:07:17.821481998 +0000 UTC m=+1021.518886857 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ade56f18-e2f8-447d-8ecd-4d396affff9b-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-nllf2" (UID: "ade56f18-e2f8-447d-8ecd-4d396affff9b") : secret "webhook-server-cert" not found Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.338834 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhq2v\" (UniqueName: \"kubernetes.io/projected/298c6ef1-dee2-4a62-a228-aa55fcbfa1b6-kube-api-access-xhq2v\") pod \"watcher-operator-controller-manager-7cd4fb6f79-mjvvx\" (UID: \"298c6ef1-dee2-4a62-a228-aa55fcbfa1b6\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.341337 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xj4\" (UniqueName: \"kubernetes.io/projected/ade56f18-e2f8-447d-8ecd-4d396affff9b-kube-api-access-75xj4\") pod \"openstack-operator-controller-manager-6cb9dc54f8-nllf2\" (UID: \"ade56f18-e2f8-447d-8ecd-4d396affff9b\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.343237 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szl6z\" (UniqueName: \"kubernetes.io/projected/ad5e138f-6210-4ef8-be27-d2b93c56b241-kube-api-access-szl6z\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz\" (UID: \"ad5e138f-6210-4ef8-be27-d2b93c56b241\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.361715 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.423511 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:17 crc kubenswrapper[5028]: E1123 07:07:17.423700 5028 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 07:07:17 crc kubenswrapper[5028]: E1123 07:07:17.423748 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert podName:3f4726b9-823e-4abf-b301-6c020b882874 nodeName:}" failed. No retries permitted until 2025-11-23 07:07:18.423734679 +0000 UTC m=+1022.121139448 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" (UID: "3f4726b9-823e-4abf-b301-6c020b882874") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.604468 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.822047 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d"] Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.833748 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ade56f18-e2f8-447d-8ecd-4d396affff9b-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-nllf2\" (UID: \"ade56f18-e2f8-447d-8ecd-4d396affff9b\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.856850 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ade56f18-e2f8-447d-8ecd-4d396affff9b-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-nllf2\" (UID: \"ade56f18-e2f8-447d-8ecd-4d396affff9b\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.918186 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" Nov 23 07:07:17 crc kubenswrapper[5028]: I1123 07:07:17.996311 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj"] Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.131535 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx"] Nov 23 07:07:18 crc kubenswrapper[5028]: W1123 07:07:18.139294 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4535624_ea6d_4e72_be76_c37915bcfe54.slice/crio-c0311af92530b593f91aa973a114e6202a4f785f924824d2c1eb339953ac9b1a WatchSource:0}: Error finding container c0311af92530b593f91aa973a114e6202a4f785f924824d2c1eb339953ac9b1a: Status 404 returned error can't find the container with id c0311af92530b593f91aa973a114e6202a4f785f924824d2c1eb339953ac9b1a Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.255585 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.263743 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2390b681-a671-4a61-a36d-6ec38f13f97f-cert\") pod \"infra-operator-controller-manager-769d9c7585-gphz8\" (UID: \"2390b681-a671-4a61-a36d-6ec38f13f97f\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.264517 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk"] Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.325043 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-wg6mc" Nov 23 
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.333835 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.461190 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.464847 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f4726b9-823e-4abf-b301-6c020b882874-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd445pzzk\" (UID: \"3f4726b9-823e-4abf-b301-6c020b882874\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.564095 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.598326 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.637771 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.649357 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" event={"ID":"76d87e89-eead-45c1-89b0-053b0e595751","Type":"ContainerStarted","Data":"0142b47f757264ffdd5b2c3b8af6003eb1ce8071a7b9dbad7f0accb41f62fc57"}
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.652290 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" event={"ID":"cfecab10-1421-49e5-9a36-f14bc9a61340","Type":"ContainerStarted","Data":"536599627bbb692779f659c2218f8f1f88c9e5c3e2f85e61abc8bcb6470f23e3"}
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.667616 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.674239 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" event={"ID":"b042c881-3ca3-44d1-916a-1ed4205b66e1","Type":"ContainerStarted","Data":"fa72855676144b168e96992d42df7e82fa28309c2b4fe1d5c5933932a7354d0f"}
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.676271 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" event={"ID":"b4535624-ea6d-4e72-be76-c37915bcfe54","Type":"ContainerStarted","Data":"c0311af92530b593f91aa973a114e6202a4f785f924824d2c1eb339953ac9b1a"}
Nov 23 07:07:18 crc kubenswrapper[5028]: W1123 07:07:18.681457 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac695e8d_ade9_44ac_8737_df03a6c712b8.slice/crio-b161e7a6dd96eee8466a604d7e9661c42044f54f9c481dae59bd230418f4a8ed WatchSource:0}: Error finding container b161e7a6dd96eee8466a604d7e9661c42044f54f9c481dae59bd230418f4a8ed: Status 404 returned error can't find the container with id b161e7a6dd96eee8466a604d7e9661c42044f54f9c481dae59bd230418f4a8ed
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.681885 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" event={"ID":"617b9fb7-df28-4230-a26f-41fd18a75cd7","Type":"ContainerStarted","Data":"fba0eb44ee26fd88a96e1093d08178c525f3ff30f08d18fe2c9a628981e38568"}
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.683375 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jch7w"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.686687 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" event={"ID":"34fd9714-3561-4cc7-9713-9f2788bf5ee4","Type":"ContainerStarted","Data":"72604dd43d5c6b0e08f88321dc8be18c1725d411d22d42a40243abcc33ac7edc"}
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.693322 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.714140 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.726715 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.755535 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-86q68"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.761724 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.766122 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk"]
Nov 23 07:07:18 crc kubenswrapper[5028]: W1123 07:07:18.767871 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400d2d41_03cf_4d6d_966b_c1676ec373d6.slice/crio-05ccdf97ae742837db80780db8d70041a3ac45b1fa73a4a207f33fb70a34ee81 WatchSource:0}: Error finding container 05ccdf97ae742837db80780db8d70041a3ac45b1fa73a4a207f33fb70a34ee81: Status 404 returned error can't find the container with id 05ccdf97ae742837db80780db8d70041a3ac45b1fa73a4a207f33fb70a34ee81
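
The W-level "Failed to process watch event ... Status 404" messages are cAdvisor racing freshly created crio containers: the cgroup watch fires before the container is registered, the lookup 404s, and the same container IDs appear seconds later in ContainerStarted events, so these are noise rather than failures. When scanning a journal like this, it can help to set them aside; a sketch, assuming the kubelet unit name from the "Starting Kubernetes Kubelet" line at the top of this log, with the time window purely as an example:

journalctl -u kubelet --since "2025-11-23 07:07:17" --until "2025-11-23 07:07:33" | grep -v "Failed to process watch event"
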
Nov 23 07:07:18 crc kubenswrapper[5028]: E1123 07:07:18.768755 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zkvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6f8c5b86cb-8qf7p_openstack-operators(c2127314-0ad9-46fe-946e-a738b8bdcd12): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 23 07:07:18 crc kubenswrapper[5028]: E1123 07:07:18.769933 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8np2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5d86b44686-hscqp_openstack-operators(400d2d41-03cf-4d6d-966b-c1676ec373d6): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.770265 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p"]
Nov 23 07:07:18 crc kubenswrapper[5028]: W1123 07:07:18.775082 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f4a06fe_3a4a_430c_8c2f_d5e81f8243fa.slice/crio-225e88442e24d18d53e7394b36a1863eba7a36fd7335d0e00fd3f57a50b6abd4 WatchSource:0}: Error finding container 225e88442e24d18d53e7394b36a1863eba7a36fd7335d0e00fd3f57a50b6abd4: Status 404 returned error can't find the container with id 225e88442e24d18d53e7394b36a1863eba7a36fd7335d0e00fd3f57a50b6abd4
Nov 23 07:07:18 crc kubenswrapper[5028]: W1123 07:07:18.776136 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10bd2367_5a93_4e29_8242_737023dd21a5.slice/crio-ee0715c49a13772e2109b966ce1e681001d38e173e8e19493e23f2f317fdf06d WatchSource:0}: Error finding container ee0715c49a13772e2109b966ce1e681001d38e173e8e19493e23f2f317fdf06d: Status 404 returned error can't find the container with id ee0715c49a13772e2109b966ce1e681001d38e173e8e19493e23f2f317fdf06d
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.781787 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.788605 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp"]
Nov 23 07:07:18 crc kubenswrapper[5028]: E1123 07:07:18.792335 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-snql2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8464cf66df-vwblx_openstack-operators(8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.793250 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv"]
Nov 23 07:07:18 crc kubenswrapper[5028]: E1123 07:07:18.799045 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhq2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cd4fb6f79-mjvvx_openstack-operators(298c6ef1-dee2-4a62-a228-aa55fcbfa1b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.801363 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx"]
Nov 23 07:07:18 crc kubenswrapper[5028]: W1123 07:07:18.801593 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod681da997_6aae_43ee_9b25_3307858c63c3.slice/crio-b99d36ca21b5776e96404ba506692c85cb7f9032d8c758abf47115d12e5c7a61 WatchSource:0}: Error finding container b99d36ca21b5776e96404ba506692c85cb7f9032d8c758abf47115d12e5c7a61: Status 404 returned error can't find the container with id b99d36ca21b5776e96404ba506692c85cb7f9032d8c758abf47115d12e5c7a61
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.807811 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.828297 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-vwblx"]
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.830724 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25"]
Nov 23 07:07:18 crc kubenswrapper[5028]: E1123 07:07:18.838405 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dxk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5bdf4f7f7f-jtt25_openstack-operators(00ecba7a-6f06-4513-9c6b-239606cc6462): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 23 07:07:18 crc kubenswrapper[5028]: E1123 07:07:18.839103 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czd7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7798859c74-nfwpv_openstack-operators(681da997-6aae-43ee-9b25-3307858c63c3): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 23 07:07:18 crc kubenswrapper[5028]: I1123 07:07:18.967900 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8"]
Nov 23 07:07:19 crc kubenswrapper[5028]: W1123 07:07:19.005563 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2390b681_a671_4a61_a36d_6ec38f13f97f.slice/crio-d892ac6cd6fc15e209bebd89d4abaa4db8b5a00dc46732b221a36b12120bcccb WatchSource:0}: Error finding container d892ac6cd6fc15e209bebd89d4abaa4db8b5a00dc46732b221a36b12120bcccb: Status 404 returned error can't find the container with id d892ac6cd6fc15e209bebd89d4abaa4db8b5a00dc46732b221a36b12120bcccb
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.047345 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" podUID="c2127314-0ad9-46fe-946e-a738b8bdcd12"
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.166307 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" podUID="8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa"
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.267849 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" podUID="400d2d41-03cf-4d6d-966b-c1676ec373d6"
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.334654 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" podUID="681da997-6aae-43ee-9b25-3307858c63c3"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.362314 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk"]
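
"pull QPS exceeded" is not a registry-side error: it is the kubelet's own client-side rate limit on image pulls (KubeletConfiguration registryPullQPS, with registryBurst as the burst size), tripped here because roughly a dozen operator images are requested in the same second. The records that follow show the failures clearing on their own once backoff spreads the pulls out (lastFinishedPulling around 07:07:29). If this thundering herd is routine, the limit can be raised; a sketch using an OpenShift KubeletConfig CR, where the pool selector is an assumption about this cluster, not something taken from the log:

oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: raise-image-pull-qps
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    registryPullQPS: 10   # kubelet default is 5 pulls/s
    registryBurst: 20     # kubelet default burst is 10
EOF
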
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.464303 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" podUID="00ecba7a-6f06-4513-9c6b-239606cc6462"
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.536369 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" podUID="298c6ef1-dee2-4a62-a228-aa55fcbfa1b6"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.698713 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" event={"ID":"681da997-6aae-43ee-9b25-3307858c63c3","Type":"ContainerStarted","Data":"b45ea2e376a14f06925439326e53422bb2a6b9cce1bc1489635b988b5c189938"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.698778 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" event={"ID":"681da997-6aae-43ee-9b25-3307858c63c3","Type":"ContainerStarted","Data":"b99d36ca21b5776e96404ba506692c85cb7f9032d8c758abf47115d12e5c7a61"}
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.701105 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" podUID="681da997-6aae-43ee-9b25-3307858c63c3"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.702486 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" event={"ID":"ac695e8d-ade9-44ac-8737-df03a6c712b8","Type":"ContainerStarted","Data":"b161e7a6dd96eee8466a604d7e9661c42044f54f9c481dae59bd230418f4a8ed"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.710543 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" event={"ID":"b31fd080-61d4-4dea-a594-932fbbccf98b","Type":"ContainerStarted","Data":"6bbe4fe8cdbfa8ec3362a84bc5e238ba669a35227085e07f8de036035e485326"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.717704 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" event={"ID":"c2127314-0ad9-46fe-946e-a738b8bdcd12","Type":"ContainerStarted","Data":"e0e19b2c6b99d43af8a579aa19431a48784ad87b40590eaff82944cd63a6c211"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.717750 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" event={"ID":"c2127314-0ad9-46fe-946e-a738b8bdcd12","Type":"ContainerStarted","Data":"a7e207e004672c3f08a25fd6bc3d4a70a458ec79c4e3bf3f4bbecd8901c20bfc"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.719166 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" event={"ID":"9ef25d42-bb38-4a2e-9a7b-a83dd0e30344","Type":"ContainerStarted","Data":"156991157f6022eb0c02c7c162c35c2b028557d128460878697fac7db869c8f9"}
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.725331 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" podUID="c2127314-0ad9-46fe-946e-a738b8bdcd12"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.725370 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" event={"ID":"8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa","Type":"ContainerStarted","Data":"67509b61c91eb4ce0123d2c4d48b90636f440a80c4a88a31e4f5dbc86934fbb5"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.725401 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" event={"ID":"8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa","Type":"ContainerStarted","Data":"225e88442e24d18d53e7394b36a1863eba7a36fd7335d0e00fd3f57a50b6abd4"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.732325 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" event={"ID":"65571797-5661-45f7-8ec9-b87dbe97a10a","Type":"ContainerStarted","Data":"680152e964bd24b61125e996e8c3f6d1506255129092febf944b910fc943b3ab"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.733714 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" event={"ID":"ad5e138f-6210-4ef8-be27-d2b93c56b241","Type":"ContainerStarted","Data":"340df4c75f4a47dd577a203d229239a560824b2b39e8cc4db483af8f765e3648"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.735136 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" event={"ID":"2390b681-a671-4a61-a36d-6ec38f13f97f","Type":"ContainerStarted","Data":"d892ac6cd6fc15e209bebd89d4abaa4db8b5a00dc46732b221a36b12120bcccb"}
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.736193 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" podUID="8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.744473 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" event={"ID":"00ecba7a-6f06-4513-9c6b-239606cc6462","Type":"ContainerStarted","Data":"5b71d5bf237183a4117e894a17c2ab128ccff514d9852fb7b6eaa79007b42098"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.744552 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" event={"ID":"00ecba7a-6f06-4513-9c6b-239606cc6462","Type":"ContainerStarted","Data":"a8852f7ef01971d48b3f3a96f84b65ec3003b64cb19b6cc10919866956701b06"}
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.746442 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" podUID="00ecba7a-6f06-4513-9c6b-239606cc6462"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.748545 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" event={"ID":"ade56f18-e2f8-447d-8ecd-4d396affff9b","Type":"ContainerStarted","Data":"2a3ef1a7245d9e7fb64527eec5a7936a1a3803cfad45b204870fade7ebf49c12"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.748576 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" event={"ID":"ade56f18-e2f8-447d-8ecd-4d396affff9b","Type":"ContainerStarted","Data":"eb6e36ecd8ce81a6f97bc8c0f07005ca88a2f6ef5ad4d18ca1a106ff7bad2d8d"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.748587 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" event={"ID":"ade56f18-e2f8-447d-8ecd-4d396affff9b","Type":"ContainerStarted","Data":"3eddfa54952d3bef2136253c95cd22bc86e58af337243bae1fd00fbf73811ab8"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.748971 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.797020 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" event={"ID":"8ef3e26b-808d-455b-a88e-1fc7d5f81fc3","Type":"ContainerStarted","Data":"4641a79b06e58821f97c4e9bd1e2eb6a36fe4596745b13b3b5a7e437adc4f522"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.809348 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2" podStartSLOduration=3.80933264 podStartE2EDuration="3.80933264s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:07:19.808844638 +0000 UTC m=+1023.506249427" watchObservedRunningTime="2025-11-23 07:07:19.80933264 +0000 UTC m=+1023.506737419"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.813141 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" event={"ID":"0755ad7e-aa96-4555-a4d8-dffb11e45807","Type":"ContainerStarted","Data":"c9fea79ea2162ab95c09648d3b31f6ec5d32acab9b80e5708e0be37e067d49e2"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.819160 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" event={"ID":"298c6ef1-dee2-4a62-a228-aa55fcbfa1b6","Type":"ContainerStarted","Data":"4f254c24cf7d9674aa9ebbee9a2fb792d89645775e9dd984028a78f5e4ae2fe8"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.819195 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" event={"ID":"298c6ef1-dee2-4a62-a228-aa55fcbfa1b6","Type":"ContainerStarted","Data":"76b33a7fb356b49c44f63df966e96de139750b6d5a0c5289ec6ee52bddfce1b8"}
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.821717 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" podUID="298c6ef1-dee2-4a62-a228-aa55fcbfa1b6"
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.832188 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" event={"ID":"3f4726b9-823e-4abf-b301-6c020b882874","Type":"ContainerStarted","Data":"32356780e722cfc110dd43bdc4266758d8e9d399578c5c8242d994a8988fccab"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.837405 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" event={"ID":"10bd2367-5a93-4e29-8242-737023dd21a5","Type":"ContainerStarted","Data":"ee0715c49a13772e2109b966ce1e681001d38e173e8e19493e23f2f317fdf06d"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.839419 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" event={"ID":"400d2d41-03cf-4d6d-966b-c1676ec373d6","Type":"ContainerStarted","Data":"4024d73f0885c44abfc2b675ac5f1e844ff41e89145a3bc945cd00f0ca282820"}
Nov 23 07:07:19 crc kubenswrapper[5028]: I1123 07:07:19.839463 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" event={"ID":"400d2d41-03cf-4d6d-966b-c1676ec373d6","Type":"ContainerStarted","Data":"05ccdf97ae742837db80780db8d70041a3ac45b1fa73a4a207f33fb70a34ee81"}
Nov 23 07:07:19 crc kubenswrapper[5028]: E1123 07:07:19.841619 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" podUID="400d2d41-03cf-4d6d-966b-c1676ec373d6"
Nov 23 07:07:20 crc kubenswrapper[5028]: E1123 07:07:20.853476 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" podUID="8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa"
Nov 23 07:07:20 crc kubenswrapper[5028]: E1123 07:07:20.854244 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" podUID="00ecba7a-6f06-4513-9c6b-239606cc6462"
Nov 23 07:07:20 crc kubenswrapper[5028]: E1123 07:07:20.854519 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" podUID="298c6ef1-dee2-4a62-a228-aa55fcbfa1b6"
Nov 23 07:07:20 crc kubenswrapper[5028]: E1123 07:07:20.854558 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" podUID="400d2d41-03cf-4d6d-966b-c1676ec373d6"
Nov 23 07:07:20 crc kubenswrapper[5028]: E1123 07:07:20.854598 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" podUID="c2127314-0ad9-46fe-946e-a738b8bdcd12"
Nov 23 07:07:20 crc kubenswrapper[5028]: E1123 07:07:20.861457 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" podUID="681da997-6aae-43ee-9b25-3307858c63c3"
Nov 23 07:07:27 crc kubenswrapper[5028]: I1123 07:07:27.924975 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-nllf2"
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.937766 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" event={"ID":"cfecab10-1421-49e5-9a36-f14bc9a61340","Type":"ContainerStarted","Data":"d0b142aafd5dcf0a01a5f1d5497ab073d64f4fd5be3b0fe3ebf8b9782be98d70"}
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.939788 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" event={"ID":"cfecab10-1421-49e5-9a36-f14bc9a61340","Type":"ContainerStarted","Data":"cefe34058c3bdb981f75538c769c82e5769d25355721fd5308ea7a5cff7e401f"}
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.939816 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj"
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.941114 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" event={"ID":"9ef25d42-bb38-4a2e-9a7b-a83dd0e30344","Type":"ContainerStarted","Data":"e2adfaee67fff30d58f389be482350322a42609fc2d0c7aff8a98884095431d5"}
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.947248 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.947312 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.954183 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" event={"ID":"34fd9714-3561-4cc7-9713-9f2788bf5ee4","Type":"ContainerStarted","Data":"1918cee2171a66ed7a6f17bacc83bed4575d63cc138ca39ffcb2622601bf81f9"}
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.975933 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" event={"ID":"65571797-5661-45f7-8ec9-b87dbe97a10a","Type":"ContainerStarted","Data":"ca79563ba3e49d30a5ef9dfc9e73c06d8432381094219b898f14ae9766602690"}
Nov 23 07:07:30 crc kubenswrapper[5028]: I1123 07:07:30.986188 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" event={"ID":"0755ad7e-aa96-4555-a4d8-dffb11e45807","Type":"ContainerStarted","Data":"db8d6a96ad7f4b7e9d529a0d07468c868498c237a2eeb42e5be553e5bb459936"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.009591 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" event={"ID":"b042c881-3ca3-44d1-916a-1ed4205b66e1","Type":"ContainerStarted","Data":"75010da640b8fb233daab9ec3df6427c468fbd47cb5c4996ce6878b11f1825ac"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.033426 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" event={"ID":"b31fd080-61d4-4dea-a594-932fbbccf98b","Type":"ContainerStarted","Data":"af2fbb99458d4501b9a6ffc54cb84cf5898195c330297c1fe4f9f6449aba4614"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.035909 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" event={"ID":"10bd2367-5a93-4e29-8242-737023dd21a5","Type":"ContainerStarted","Data":"56cb5b17334d6f95074e5bde5e8b83064bb0185604aa9922052346639708e71c"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.043401 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" event={"ID":"76d87e89-eead-45c1-89b0-053b0e595751","Type":"ContainerStarted","Data":"82b55c1d9b7a617b63349d52f28c823dc34cbe6240b97023617ab2a30e097734"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.082826 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" event={"ID":"3f4726b9-823e-4abf-b301-6c020b882874","Type":"ContainerStarted","Data":"7b3f7f0b9678c7a569192a6863cde5c6942976a4ec33dbc2d1c176ffa9e55ca6"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.091241 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" event={"ID":"8ef3e26b-808d-455b-a88e-1fc7d5f81fc3","Type":"ContainerStarted","Data":"b8c277ec67f7f472dc3484a18110f6f1458f9773e5b8d6894e1232e1e16a2ae8"}
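
The machine-config-daemon liveness failure is the one record here that is unrelated to the operator rollout: it comes from the openshift-machine-config-operator namespace, and "connection refused" on 127.0.0.1:8798 usually means the daemon container is mid-restart rather than wedged. A quick check, reusing the pod name from the log and assuming cluster access:

oc -n openshift-machine-config-operator get pod machine-config-daemon-th92p
oc -n openshift-machine-config-operator describe pod machine-config-daemon-th92p
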
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.091286 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" event={"ID":"8ef3e26b-808d-455b-a88e-1fc7d5f81fc3","Type":"ContainerStarted","Data":"27b796a2f1f060b30b29ee1f323d8ba1bed3db8c751c3a747f969e3bd9544510"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.091572 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk"
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.121300 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" podStartSLOduration=3.384298965 podStartE2EDuration="15.121287644s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.008925749 +0000 UTC m=+1021.706330528" lastFinishedPulling="2025-11-23 07:07:29.745914428 +0000 UTC m=+1033.443319207" observedRunningTime="2025-11-23 07:07:30.972964205 +0000 UTC m=+1034.670368984" watchObservedRunningTime="2025-11-23 07:07:31.121287644 +0000 UTC m=+1034.818692423"
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.124210 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" event={"ID":"2390b681-a671-4a61-a36d-6ec38f13f97f","Type":"ContainerStarted","Data":"05c2f4e0c10274a01ad41662d92a6f58a76682903fcae88def71da78ec0e264e"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.124683 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" podStartSLOduration=4.141150263 podStartE2EDuration="15.124676357s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.764087075 +0000 UTC m=+1022.461491854" lastFinishedPulling="2025-11-23 07:07:29.747613169 +0000 UTC m=+1033.445017948" observedRunningTime="2025-11-23 07:07:31.120439724 +0000 UTC m=+1034.817844503" watchObservedRunningTime="2025-11-23 07:07:31.124676357 +0000 UTC m=+1034.822081136"
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.154323 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" event={"ID":"b4535624-ea6d-4e72-be76-c37915bcfe54","Type":"ContainerStarted","Data":"3b3b814a1e688ad1af877ea3330379ac8fdbb36ddb890d16faa6c0f355b09f5f"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.185992 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" event={"ID":"617b9fb7-df28-4230-a26f-41fd18a75cd7","Type":"ContainerStarted","Data":"72ce5723c4f4957aaf1a9f12ae39793adc4205f985fefd224308b42437bd48e8"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.197053 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" event={"ID":"ac695e8d-ade9-44ac-8737-df03a6c712b8","Type":"ContainerStarted","Data":"547528baa543088ddb108d41c77ba49d225ff7c664641f407e3fe40d09fae641"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.210158 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" event={"ID":"ad5e138f-6210-4ef8-be27-d2b93c56b241","Type":"ContainerStarted","Data":"8cc97650f4bddc6454c8291c8b8dc342662952b38b9edc2d17bd9899cdf0b69f"}
Nov 23 07:07:31 crc kubenswrapper[5028]: I1123 07:07:31.243702 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz" podStartSLOduration=3.091563029 podStartE2EDuration="14.243687085s" podCreationTimestamp="2025-11-23 07:07:17 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.730301035 +0000 UTC m=+1022.427705814" lastFinishedPulling="2025-11-23 07:07:29.882425091 +0000 UTC m=+1033.579829870" observedRunningTime="2025-11-23 07:07:31.238863938 +0000 UTC m=+1034.936268717" watchObservedRunningTime="2025-11-23 07:07:31.243687085 +0000 UTC m=+1034.941091864"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.219061 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" event={"ID":"ac695e8d-ade9-44ac-8737-df03a6c712b8","Type":"ContainerStarted","Data":"5556dcfc01da35441e79b7a2c4b916e3467ab65d1f574579af171951e1d7f3f4"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.219226 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.220654 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" event={"ID":"b042c881-3ca3-44d1-916a-1ed4205b66e1","Type":"ContainerStarted","Data":"4d6d89ccde9dc6fc9dd79a6a8cabbd75f3d2e1ebf367d0509c04bd29cf7cb610"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.220769 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.222388 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" event={"ID":"3f4726b9-823e-4abf-b301-6c020b882874","Type":"ContainerStarted","Data":"1d3b7bc805c6296f2970ed9b8002a81b8b0aaf04a1f559b86a4f5fc75e0d223a"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.222558 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.224341 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" event={"ID":"617b9fb7-df28-4230-a26f-41fd18a75cd7","Type":"ContainerStarted","Data":"43002d339b0de9f6f53bb68342b639fd8893ff1cb3388dafbc82d7752c12c836"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.224471 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.226152 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" event={"ID":"65571797-5661-45f7-8ec9-b87dbe97a10a","Type":"ContainerStarted","Data":"000ed531db46b85280662c7f952c74586df7c5573781cbc5da0957c1d0ba7cb4"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.228372 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" event={"ID":"76d87e89-eead-45c1-89b0-053b0e595751","Type":"ContainerStarted","Data":"555d027f549a4a489b0c6520a5bb514b76c7b7ae0e90b3de0e02cb64594cc03f"}
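
The pod_startup_latency_tracker lines summarize where startup time went for each operator pod: podStartE2EDuration is creation to observed-running, while podStartSLOduration excludes image pulling, which is why the two differ by roughly the firstStartedPulling-to-lastFinishedPulling window (about 11s here, inflated by the pull QPS backoff above). A rough way to pull just those summaries out of the same journal, using the field names exactly as they appear in these records:

journalctl -u kubelet | grep -o 'pod="[^"]*" podStartSLOduration=[^ ]* podStartE2EDuration="[^"]*"'
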
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.228499 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.229993 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" event={"ID":"2390b681-a671-4a61-a36d-6ec38f13f97f","Type":"ContainerStarted","Data":"28d9019ff1103e8556261a5095fd70c510805c7890a6503d0db8ab1d5aa02e2c"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.230068 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.232227 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" event={"ID":"10bd2367-5a93-4e29-8242-737023dd21a5","Type":"ContainerStarted","Data":"9989836b92e825544a6fd40a999b69e6891ea71ed2ae76ab19449485a68b81c6"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.232368 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.234013 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" event={"ID":"b4535624-ea6d-4e72-be76-c37915bcfe54","Type":"ContainerStarted","Data":"78d9784c13a21ef0121ac576d4ff3a583a83dbb0d2b86a5938825b2ce6c3e438"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.234130 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.235658 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" event={"ID":"9ef25d42-bb38-4a2e-9a7b-a83dd0e30344","Type":"ContainerStarted","Data":"bccca670f1ea3e72ce8701691c41ece3489221944cc946eb471d158c23aa6b5e"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.235769 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.237379 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" event={"ID":"34fd9714-3561-4cc7-9713-9f2788bf5ee4","Type":"ContainerStarted","Data":"795e2c092ed2fde94b387485299cc76eb7fd3c1d84bd2e4b7134ba8ff92918cf"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.237460 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.242575 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" podStartSLOduration=5.188505918 podStartE2EDuration="16.242561554s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.690559171 +0000 UTC m=+1022.387963950" lastFinishedPulling="2025-11-23 07:07:29.744614807 +0000 UTC m=+1033.442019586" observedRunningTime="2025-11-23 07:07:32.241914668 +0000 UTC m=+1035.939319447" watchObservedRunningTime="2025-11-23 07:07:32.242561554 +0000 UTC m=+1035.939966333"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.243006 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" event={"ID":"b31fd080-61d4-4dea-a594-932fbbccf98b","Type":"ContainerStarted","Data":"3aee3c7f4606709ef8e2591d3fe32f76e40856d14db0603be1bfb892bc0c78c8"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.243160 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.248671 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" event={"ID":"0755ad7e-aa96-4555-a4d8-dffb11e45807","Type":"ContainerStarted","Data":"505df541f66a4a9aa7f89c5d7fae01e02f9aa63f82d19bdc4a18abfbb249bf6e"}
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.248717 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.266410 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" podStartSLOduration=5.146738374 podStartE2EDuration="16.266393482s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.628417973 +0000 UTC m=+1022.325822752" lastFinishedPulling="2025-11-23 07:07:29.748073091 +0000 UTC m=+1033.445477860" observedRunningTime="2025-11-23 07:07:32.261856832 +0000 UTC m=+1035.959261611" watchObservedRunningTime="2025-11-23 07:07:32.266393482 +0000 UTC m=+1035.963798251"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.282523 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" podStartSLOduration=5.126362379 podStartE2EDuration="16.282508733s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.590290147 +0000 UTC m=+1022.287694916" lastFinishedPulling="2025-11-23 07:07:29.746436491 +0000 UTC m=+1033.443841270" observedRunningTime="2025-11-23 07:07:32.280347551 +0000 UTC m=+1035.977752330" watchObservedRunningTime="2025-11-23 07:07:32.282508733 +0000 UTC m=+1035.979913512"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.302133 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" podStartSLOduration=4.708042018 podStartE2EDuration="16.302115399s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.153985209 +0000 UTC m=+1021.851389988" lastFinishedPulling="2025-11-23 07:07:29.74805859 +0000 UTC m=+1033.445463369" observedRunningTime="2025-11-23 07:07:32.297127638 +0000 UTC m=+1035.994532417" watchObservedRunningTime="2025-11-23 07:07:32.302115399 +0000 UTC m=+1035.999520178"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.319610 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" podStartSLOduration=5.361856544 podStartE2EDuration="16.319591153s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.788173729 +0000 UTC m=+1022.485578498" lastFinishedPulling="2025-11-23 07:07:29.745908328 +0000 UTC m=+1033.443313107" observedRunningTime="2025-11-23 07:07:32.315896973 +0000 UTC m=+1036.013301752" watchObservedRunningTime="2025-11-23 07:07:32.319591153 +0000 UTC m=+1036.016995932"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.339914 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" podStartSLOduration=6.02348848 podStartE2EDuration="16.339896636s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:19.428383935 +0000 UTC m=+1023.125788714" lastFinishedPulling="2025-11-23 07:07:29.744792091 +0000 UTC m=+1033.442196870" observedRunningTime="2025-11-23 07:07:32.337313723 +0000 UTC m=+1036.034718502" watchObservedRunningTime="2025-11-23 07:07:32.339896636 +0000 UTC m=+1036.037301415"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.354939 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" podStartSLOduration=5.648764617 podStartE2EDuration="16.35491581s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:19.040301298 +0000 UTC m=+1022.737706077" lastFinishedPulling="2025-11-23 07:07:29.746452481 +0000 UTC m=+1033.443857270" observedRunningTime="2025-11-23 07:07:32.35162969 +0000 UTC m=+1036.049034469" watchObservedRunningTime="2025-11-23 07:07:32.35491581 +0000 UTC m=+1036.052320579"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.375655 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" podStartSLOduration=5.365869292 podStartE2EDuration="16.375636573s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.734827415 +0000 UTC m=+1022.432232194" lastFinishedPulling="2025-11-23 07:07:29.744594696 +0000 UTC m=+1033.441999475" observedRunningTime="2025-11-23 07:07:32.371935613 +0000 UTC m=+1036.069340392" watchObservedRunningTime="2025-11-23 07:07:32.375636573 +0000 UTC m=+1036.073041352"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.411720 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" podStartSLOduration=4.641409981 podStartE2EDuration="16.411695618s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:17.974543345 +0000 UTC m=+1021.671948124" lastFinishedPulling="2025-11-23 07:07:29.744828982 +0000 UTC m=+1033.442233761" observedRunningTime="2025-11-23 07:07:32.403651483 +0000 UTC m=+1036.101056262" watchObservedRunningTime="2025-11-23 07:07:32.411695618 +0000 UTC m=+1036.109100397"
Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.422156 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" podStartSLOduration=4.954637692 podStartE2EDuration="16.422138281s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.277379144 +0000 UTC m=+1021.974783923" lastFinishedPulling="2025-11-23 07:07:29.744879733 +0000 UTC m=+1033.442284512" observedRunningTime="2025-11-23 07:07:32.421846884 +0000 UTC m=+1036.119251673" watchObservedRunningTime="2025-11-23 07:07:32.422138281 +0000 UTC
m=+1036.119543050" Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.438451 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" podStartSLOduration=5.423385207 podStartE2EDuration="16.438437947s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.729810663 +0000 UTC m=+1022.427215442" lastFinishedPulling="2025-11-23 07:07:29.744863403 +0000 UTC m=+1033.442268182" observedRunningTime="2025-11-23 07:07:32.436411278 +0000 UTC m=+1036.133816047" watchObservedRunningTime="2025-11-23 07:07:32.438437947 +0000 UTC m=+1036.135842726" Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.453959 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" podStartSLOduration=5.439234212 podStartE2EDuration="16.453926873s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.730147981 +0000 UTC m=+1022.427552760" lastFinishedPulling="2025-11-23 07:07:29.744840642 +0000 UTC m=+1033.442245421" observedRunningTime="2025-11-23 07:07:32.451225167 +0000 UTC m=+1036.148629956" watchObservedRunningTime="2025-11-23 07:07:32.453926873 +0000 UTC m=+1036.151331652" Nov 23 07:07:32 crc kubenswrapper[5028]: I1123 07:07:32.473683 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" podStartSLOduration=5.45518775 podStartE2EDuration="16.473657922s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.729990708 +0000 UTC m=+1022.427395487" lastFinishedPulling="2025-11-23 07:07:29.74846088 +0000 UTC m=+1033.445865659" observedRunningTime="2025-11-23 07:07:32.46987733 +0000 UTC m=+1036.167282109" watchObservedRunningTime="2025-11-23 07:07:32.473657922 +0000 UTC m=+1036.171062701" Nov 23 07:07:33 crc kubenswrapper[5028]: I1123 07:07:33.255811 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" Nov 23 07:07:36 crc kubenswrapper[5028]: I1123 07:07:36.547630 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-zr9nj" Nov 23 07:07:36 crc kubenswrapper[5028]: I1123 07:07:36.564285 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-bwdxx" Nov 23 07:07:36 crc kubenswrapper[5028]: I1123 07:07:36.598835 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-fr59d" Nov 23 07:07:36 crc kubenswrapper[5028]: I1123 07:07:36.628589 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-85pdk" Nov 23 07:07:36 crc kubenswrapper[5028]: I1123 07:07:36.670854 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-6cwkk" Nov 23 07:07:36 crc kubenswrapper[5028]: I1123 07:07:36.959195 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-r9zdz" Nov 23 07:07:37 crc kubenswrapper[5028]: I1123 
07:07:37.014223 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-tl2zc" Nov 23 07:07:37 crc kubenswrapper[5028]: I1123 07:07:37.028713 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-f2nqb" Nov 23 07:07:37 crc kubenswrapper[5028]: I1123 07:07:37.098777 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-g4fnt" Nov 23 07:07:37 crc kubenswrapper[5028]: I1123 07:07:37.137344 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nbtch" Nov 23 07:07:37 crc kubenswrapper[5028]: I1123 07:07:37.171219 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-86q68" Nov 23 07:07:37 crc kubenswrapper[5028]: I1123 07:07:37.229315 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-67p4j" Nov 23 07:07:37 crc kubenswrapper[5028]: I1123 07:07:37.229724 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-bczft" Nov 23 07:07:38 crc kubenswrapper[5028]: I1123 07:07:38.340994 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-gphz8" Nov 23 07:07:38 crc kubenswrapper[5028]: I1123 07:07:38.700993 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.341421 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" event={"ID":"c2127314-0ad9-46fe-946e-a738b8bdcd12","Type":"ContainerStarted","Data":"cb641744e0f84935082101e98576b5590a7236b75ada54b997c58786cfdc1535"} Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.342348 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.343609 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" event={"ID":"00ecba7a-6f06-4513-9c6b-239606cc6462","Type":"ContainerStarted","Data":"20378119e03b9af5b743965d87d98dedbca786f33f555de06d884f79f9958035"} Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.343805 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.347273 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" event={"ID":"8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa","Type":"ContainerStarted","Data":"09d3f40359d06640ea52a5f0232b316c17a3da63939135fb0453555c5ff48110"} Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.347449 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.349222 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" event={"ID":"681da997-6aae-43ee-9b25-3307858c63c3","Type":"ContainerStarted","Data":"4239762defc3bbf893a06af7b2586792a34f7c4c5eb9523574a4660938969478"} Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.349433 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.352311 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" event={"ID":"298c6ef1-dee2-4a62-a228-aa55fcbfa1b6","Type":"ContainerStarted","Data":"f5d06baf180322f2dd3a7443655f8d327ea90e273782f258a4d842699aaed44a"} Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.353015 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.354799 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" event={"ID":"400d2d41-03cf-4d6d-966b-c1676ec373d6","Type":"ContainerStarted","Data":"57ed05260c831086aaaafb99a210a5786d5252b3734da8597d0f099e28341e30"} Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.355313 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.366430 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" podStartSLOduration=4.075486758 podStartE2EDuration="29.366406427s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.768635715 +0000 UTC m=+1022.466040494" lastFinishedPulling="2025-11-23 07:07:44.059555384 +0000 UTC m=+1047.756960163" observedRunningTime="2025-11-23 07:07:45.363556048 +0000 UTC m=+1049.060960827" watchObservedRunningTime="2025-11-23 07:07:45.366406427 +0000 UTC m=+1049.063811226" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.382404 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" podStartSLOduration=4.067810832 podStartE2EDuration="29.382376385s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.792212087 +0000 UTC m=+1022.489616866" lastFinishedPulling="2025-11-23 07:07:44.10677764 +0000 UTC m=+1047.804182419" observedRunningTime="2025-11-23 07:07:45.380691684 +0000 UTC m=+1049.078096473" watchObservedRunningTime="2025-11-23 07:07:45.382376385 +0000 UTC m=+1049.079781164" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.397697 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" podStartSLOduration=4.107147327 podStartE2EDuration="29.397675906s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.769752142 +0000 UTC m=+1022.467156921" lastFinishedPulling="2025-11-23 07:07:44.060280721 +0000 UTC 
m=+1047.757685500" observedRunningTime="2025-11-23 07:07:45.396438986 +0000 UTC m=+1049.093843765" watchObservedRunningTime="2025-11-23 07:07:45.397675906 +0000 UTC m=+1049.095080685" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.419284 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" podStartSLOduration=4.67676735 podStartE2EDuration="29.4192602s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.838241244 +0000 UTC m=+1022.535646023" lastFinishedPulling="2025-11-23 07:07:43.580734084 +0000 UTC m=+1047.278138873" observedRunningTime="2025-11-23 07:07:45.414202567 +0000 UTC m=+1049.111607366" watchObservedRunningTime="2025-11-23 07:07:45.4192602 +0000 UTC m=+1049.116664989" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.435170 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" podStartSLOduration=4.126421625 podStartE2EDuration="29.435147075s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.798825538 +0000 UTC m=+1022.496230317" lastFinishedPulling="2025-11-23 07:07:44.107550988 +0000 UTC m=+1047.804955767" observedRunningTime="2025-11-23 07:07:45.430500153 +0000 UTC m=+1049.127904942" watchObservedRunningTime="2025-11-23 07:07:45.435147075 +0000 UTC m=+1049.132551854" Nov 23 07:07:45 crc kubenswrapper[5028]: I1123 07:07:45.450390 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" podStartSLOduration=4.157370926 podStartE2EDuration="29.450369085s" podCreationTimestamp="2025-11-23 07:07:16 +0000 UTC" firstStartedPulling="2025-11-23 07:07:18.838935011 +0000 UTC m=+1022.536339790" lastFinishedPulling="2025-11-23 07:07:44.13193317 +0000 UTC m=+1047.829337949" observedRunningTime="2025-11-23 07:07:45.446351637 +0000 UTC m=+1049.143756416" watchObservedRunningTime="2025-11-23 07:07:45.450369085 +0000 UTC m=+1049.147773864" Nov 23 07:07:56 crc kubenswrapper[5028]: I1123 07:07:56.771205 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-hscqp" Nov 23 07:07:57 crc kubenswrapper[5028]: I1123 07:07:57.083574 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-8qf7p" Nov 23 07:07:57 crc kubenswrapper[5028]: I1123 07:07:57.171056 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-jtt25" Nov 23 07:07:57 crc kubenswrapper[5028]: I1123 07:07:57.275803 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-nfwpv" Nov 23 07:07:57 crc kubenswrapper[5028]: I1123 07:07:57.287938 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8464cf66df-vwblx" Nov 23 07:07:57 crc kubenswrapper[5028]: I1123 07:07:57.609092 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-mjvvx" Nov 23 07:08:00 crc kubenswrapper[5028]: I1123 07:08:00.946529 5028 patch_prober.go:28] interesting 
pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:08:00 crc kubenswrapper[5028]: I1123 07:08:00.947101 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:08:00 crc kubenswrapper[5028]: I1123 07:08:00.947165 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:08:00 crc kubenswrapper[5028]: I1123 07:08:00.947843 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:08:00 crc kubenswrapper[5028]: I1123 07:08:00.947912 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd" gracePeriod=600 Nov 23 07:08:01 crc kubenswrapper[5028]: E1123 07:08:01.093148 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa1c051a_31cd_4dd3_9be8_6194822c2273.slice/crio-153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa1c051a_31cd_4dd3_9be8_6194822c2273.slice/crio-conmon-153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:08:01 crc kubenswrapper[5028]: I1123 07:08:01.482673 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd" exitCode=0 Nov 23 07:08:01 crc kubenswrapper[5028]: I1123 07:08:01.482720 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd"} Nov 23 07:08:01 crc kubenswrapper[5028]: I1123 07:08:01.483115 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"fad5b43d5150bb13c86e0639fa99a9b8f8c637943306815b8e96f42f58e277f1"} Nov 23 07:08:01 crc kubenswrapper[5028]: I1123 07:08:01.483144 5028 scope.go:117] "RemoveContainer" containerID="7ae2ca72370a9e3e15bd4e9680a68748662e74c8a868e4dcea0405be9f5e30cb" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.838677 5028 kubelet.go:2421] "SyncLoop ADD" 
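
The machine-config-daemon entries show the liveness side of the same probe machinery whose readiness side flips pods from `status=""` to `status="ready"` above: an HTTP GET against 127.0.0.1:8798/health fails with `connection refused`, the kubelet marks the container unhealthy, kills it with the pod's 600s grace period, and the ContainerDied/ContainerStarted pair follows. A minimal sketch of the HTTP check itself, assuming the kubelet's usual success rule (any 2xx/3xx status):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce mirrors what the log shows the kubelet doing: an HTTP GET
// against the container's health endpoint, where a transport error such
// as "connect: connection refused" counts as a probe failure.
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
		// After failureThreshold consecutive failures the kubelet kills
		// the container (here with gracePeriod=600) and restarts it.
	}
}
```
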
source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-h94tr"] Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.840758 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.844783 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.844824 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.845363 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.845593 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gp877" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.875519 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-h94tr"] Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.884019 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-nc986"] Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.885558 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.888280 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 23 07:08:11 crc kubenswrapper[5028]: I1123 07:08:11.910801 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-nc986"] Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.006538 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-dns-svc\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.006637 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282fa862-6460-463b-89e0-b05f09b9036d-config\") pod \"dnsmasq-dns-7bdd77c89-h94tr\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.006719 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqxd\" (UniqueName: \"kubernetes.io/projected/737c28ab-1c2c-491a-9d21-35f334f38222-kube-api-access-2zqxd\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.006765 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzs9s\" (UniqueName: \"kubernetes.io/projected/282fa862-6460-463b-89e0-b05f09b9036d-kube-api-access-xzs9s\") pod \"dnsmasq-dns-7bdd77c89-h94tr\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.006793 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-config\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.108203 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqxd\" (UniqueName: \"kubernetes.io/projected/737c28ab-1c2c-491a-9d21-35f334f38222-kube-api-access-2zqxd\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.108282 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzs9s\" (UniqueName: \"kubernetes.io/projected/282fa862-6460-463b-89e0-b05f09b9036d-kube-api-access-xzs9s\") pod \"dnsmasq-dns-7bdd77c89-h94tr\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.108362 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-config\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.108421 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-dns-svc\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.108470 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282fa862-6460-463b-89e0-b05f09b9036d-config\") pod \"dnsmasq-dns-7bdd77c89-h94tr\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.109787 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282fa862-6460-463b-89e0-b05f09b9036d-config\") pod \"dnsmasq-dns-7bdd77c89-h94tr\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.109789 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-dns-svc\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.109792 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-config\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.133170 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqxd\" (UniqueName: \"kubernetes.io/projected/737c28ab-1c2c-491a-9d21-35f334f38222-kube-api-access-2zqxd\") pod \"dnsmasq-dns-6584b49599-nc986\" (UID: 
\"737c28ab-1c2c-491a-9d21-35f334f38222\") " pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.133354 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzs9s\" (UniqueName: \"kubernetes.io/projected/282fa862-6460-463b-89e0-b05f09b9036d-kube-api-access-xzs9s\") pod \"dnsmasq-dns-7bdd77c89-h94tr\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.171131 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.205100 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.653972 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-h94tr"] Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.655752 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:08:12 crc kubenswrapper[5028]: I1123 07:08:12.735179 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-nc986"] Nov 23 07:08:12 crc kubenswrapper[5028]: W1123 07:08:12.737207 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod737c28ab_1c2c_491a_9d21_35f334f38222.slice/crio-11b80cdf6c9093618732c31fec74f11bbce9f6b2ad4355499a88cfb395653f09 WatchSource:0}: Error finding container 11b80cdf6c9093618732c31fec74f11bbce9f6b2ad4355499a88cfb395653f09: Status 404 returned error can't find the container with id 11b80cdf6c9093618732c31fec74f11bbce9f6b2ad4355499a88cfb395653f09 Nov 23 07:08:13 crc kubenswrapper[5028]: I1123 07:08:13.575044 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-nc986" event={"ID":"737c28ab-1c2c-491a-9d21-35f334f38222","Type":"ContainerStarted","Data":"11b80cdf6c9093618732c31fec74f11bbce9f6b2ad4355499a88cfb395653f09"} Nov 23 07:08:13 crc kubenswrapper[5028]: I1123 07:08:13.579163 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" event={"ID":"282fa862-6460-463b-89e0-b05f09b9036d","Type":"ContainerStarted","Data":"ca3fd42413dc925cbc6bf221c070e5dc1fd2fec1baf2dab69aaef27c6a5c6437"} Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.585652 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-nc986"] Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.618776 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-szt9d"] Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.644355 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.667848 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-szt9d"] Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.748980 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f9mf\" (UniqueName: \"kubernetes.io/projected/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-kube-api-access-9f9mf\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.749050 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.749071 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-config\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.851922 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f9mf\" (UniqueName: \"kubernetes.io/projected/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-kube-api-access-9f9mf\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.852037 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.852060 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-config\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.853101 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-config\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.853149 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.884498 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f9mf\" (UniqueName: 
\"kubernetes.io/projected/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-kube-api-access-9f9mf\") pod \"dnsmasq-dns-7c6d9948dc-szt9d\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:14 crc kubenswrapper[5028]: I1123 07:08:14.988830 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.056525 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-h94tr"] Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.100721 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-k6glv"] Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.105509 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-k6glv"] Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.105622 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.162457 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-config\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.162548 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz4pm\" (UniqueName: \"kubernetes.io/projected/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-kube-api-access-mz4pm\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.162673 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-dns-svc\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.264389 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz4pm\" (UniqueName: \"kubernetes.io/projected/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-kube-api-access-mz4pm\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.264758 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-dns-svc\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.264782 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-config\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.265656 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-config\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.266300 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-dns-svc\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.282589 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz4pm\" (UniqueName: \"kubernetes.io/projected/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-kube-api-access-mz4pm\") pod \"dnsmasq-dns-6486446b9f-k6glv\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.473304 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.595600 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-szt9d"] Nov 23 07:08:15 crc kubenswrapper[5028]: W1123 07:08:15.630801 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffdc443a_4ba3_4b2b_9966_be5bcbf037d5.slice/crio-bf150b39b0fd113b6ad5caacf4578e88d50f115fb6ea839179c82a7be153f1fc WatchSource:0}: Error finding container bf150b39b0fd113b6ad5caacf4578e88d50f115fb6ea839179c82a7be153f1fc: Status 404 returned error can't find the container with id bf150b39b0fd113b6ad5caacf4578e88d50f115fb6ea839179c82a7be153f1fc Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.871632 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.873772 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.891390 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.891608 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.895793 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r654g" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.895986 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.897048 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.898022 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.898604 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.904739 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.965513 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-k6glv"] Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.978648 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.978689 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.978709 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.978728 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.978752 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8399afb1-fbd2-4ce0-b980-46b317d6cfee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.978773 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h7ph\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-kube-api-access-9h7ph\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.978967 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.979005 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.979042 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8399afb1-fbd2-4ce0-b980-46b317d6cfee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.979067 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:15 crc kubenswrapper[5028]: I1123 07:08:15.979151 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081542 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081663 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081694 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081721 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081744 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081781 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8399afb1-fbd2-4ce0-b980-46b317d6cfee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081849 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h7ph\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-kube-api-access-9h7ph\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081912 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.081935 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.082195 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8399afb1-fbd2-4ce0-b980-46b317d6cfee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.082229 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.083823 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.084044 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.084208 5028 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.084277 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.084599 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.084848 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.090109 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.090609 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.100512 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8399afb1-fbd2-4ce0-b980-46b317d6cfee-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.105520 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h7ph\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-kube-api-access-9h7ph\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.107506 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8399afb1-fbd2-4ce0-b980-46b317d6cfee-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.115881 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") " pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 
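
`local-storage03-crc` is the one volume in this run that takes the two-phase path: `MountVolume.MountDevice` first (operation_generator.go:580, with device mount path /mnt/openstack/pv03), then `MountVolume.SetUp`, which is how the kubelet handles PersistentVolumes as opposed to pod-scoped ConfigMap/Secret/emptyDir volumes. A hypothetical sketch of the local PV object behind it; only the name and path are taken from the log, the rest of the spec is assumed:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical shape of the PV bound as "local-storage03-crc"; the
// actual object on the CRC node may carry capacity, node affinity,
// and storage class fields not shown here.
func localPV() *corev1.PersistentVolume {
	return &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage03-crc"},
		Spec: corev1.PersistentVolumeSpec{
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv03"}, // path from the MountDevice record
			},
		},
	}
}

func main() {
	pv := localPV()
	fmt.Println(pv.Name, pv.Spec.Local.Path)
}
```
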
crc kubenswrapper[5028]: I1123 07:08:16.194506 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.198600 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.201550 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.201621 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8xz7h" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.201904 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.201910 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.202054 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.202073 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.202261 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.232688 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.248105 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.385512 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386071 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386110 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386138 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386168 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386292 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386329 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386374 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t49w2\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-kube-api-access-t49w2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386458 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386534 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.386578 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488619 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488666 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488691 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488707 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488747 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488765 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488782 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t49w2\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-kube-api-access-t49w2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc 
kubenswrapper[5028]: I1123 07:08:16.488815 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488860 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488886 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.488932 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.489059 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.489913 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.491235 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.491281 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.491437 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.492678 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.497105 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.497306 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.497467 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.501410 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.509666 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t49w2\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-kube-api-access-t49w2\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.522348 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.639315 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" event={"ID":"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c","Type":"ContainerStarted","Data":"72e2b3770f88a9a22a16608ec02cf03f9128aa9987dd47abf93b9f5a1a55df06"} Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.640397 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" event={"ID":"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5","Type":"ContainerStarted","Data":"bf150b39b0fd113b6ad5caacf4578e88d50f115fb6ea839179c82a7be153f1fc"} Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.763833 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:08:16 crc kubenswrapper[5028]: W1123 07:08:16.775497 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8399afb1_fbd2_4ce0_b980_46b317d6cfee.slice/crio-be7964c75ec89318e27a0a2810585c7f6186df68011439570c0df8cecfaddff7 WatchSource:0}: Error finding 
container be7964c75ec89318e27a0a2810585c7f6186df68011439570c0df8cecfaddff7: Status 404 returned error can't find the container with id be7964c75ec89318e27a0a2810585c7f6186df68011439570c0df8cecfaddff7 Nov 23 07:08:16 crc kubenswrapper[5028]: I1123 07:08:16.824290 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.163047 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.165101 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.173894 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.174451 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.175470 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rr7xq" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.177060 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.177611 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.185915 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.278098 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304351 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-default\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304414 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-generated\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304440 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304472 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304490 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgp8\" (UniqueName: \"kubernetes.io/projected/88c70fc5-621a-45c5-bcf1-716d14e48792-kube-api-access-cvgp8\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304518 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304552 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-operator-scripts\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.304603 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-kolla-config\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406184 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-default\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406242 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-generated\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406264 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406294 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406314 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvgp8\" (UniqueName: \"kubernetes.io/projected/88c70fc5-621a-45c5-bcf1-716d14e48792-kube-api-access-cvgp8\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406341 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406369 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-operator-scripts\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.406420 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-kolla-config\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.407213 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-kolla-config\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.407407 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-default\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.407497 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-generated\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.407719 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.410246 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-operator-scripts\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.419022 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.430829 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.452534 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgp8\" (UniqueName: \"kubernetes.io/projected/88c70fc5-621a-45c5-bcf1-716d14e48792-kube-api-access-cvgp8\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.460368 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.489238 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.653717 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8399afb1-fbd2-4ce0-b980-46b317d6cfee","Type":"ContainerStarted","Data":"be7964c75ec89318e27a0a2810585c7f6186df68011439570c0df8cecfaddff7"} Nov 23 07:08:17 crc kubenswrapper[5028]: I1123 07:08:17.658538 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7","Type":"ContainerStarted","Data":"08e66610e24174c7e42cf6ba43bd7183bda3dd3c5bb57c8d1313ffe879923d19"} Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.012061 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:08:18 crc kubenswrapper[5028]: W1123 07:08:18.017793 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88c70fc5_621a_45c5_bcf1_716d14e48792.slice/crio-1b54da68b1421bdfab168a510797e851397eb5fe8e0a19a20a7f370e7d47cc08 WatchSource:0}: Error finding container 1b54da68b1421bdfab168a510797e851397eb5fe8e0a19a20a7f370e7d47cc08: Status 404 returned error can't find the container with id 1b54da68b1421bdfab168a510797e851397eb5fe8e0a19a20a7f370e7d47cc08 Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.668464 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"88c70fc5-621a-45c5-bcf1-716d14e48792","Type":"ContainerStarted","Data":"1b54da68b1421bdfab168a510797e851397eb5fe8e0a19a20a7f370e7d47cc08"} Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.807837 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.809445 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.813429 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.813696 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.813863 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-r5xkm" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.814050 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.835560 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.913815 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.915358 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.918650 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.918879 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.919070 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ptvxk" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.924565 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.932840 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.932924 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.933007 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.933057 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.933073 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.933166 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.933187 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q78l2\" (UniqueName: \"kubernetes.io/projected/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kube-api-access-q78l2\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:18 crc kubenswrapper[5028]: I1123 07:08:18.933238 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035281 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035367 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035404 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kolla-config\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035437 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035471 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx9nv\" (UniqueName: \"kubernetes.io/projected/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kube-api-access-tx9nv\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035522 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035544 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035571 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035602 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-config-data\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035622 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035644 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035664 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q78l2\" (UniqueName: \"kubernetes.io/projected/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kube-api-access-q78l2\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.035704 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.036688 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.040523 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.052419 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.052747 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.053549 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.062451 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.065556 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q78l2\" (UniqueName: \"kubernetes.io/projected/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kube-api-access-q78l2\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.086473 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.092161 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.138455 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kolla-config\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.140397 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx9nv\" (UniqueName: \"kubernetes.io/projected/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kube-api-access-tx9nv\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 
23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.139211 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kolla-config\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.140794 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.141282 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-config-data\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.141307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.142162 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-config-data\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.146820 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.147169 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.170614 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx9nv\" (UniqueName: \"kubernetes.io/projected/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kube-api-access-tx9nv\") pod \"memcached-0\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " pod="openstack/memcached-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.180018 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 07:08:19 crc kubenswrapper[5028]: I1123 07:08:19.234309 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.519017 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.520481 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.526289 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dfq52" Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.529486 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.584508 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49gcd\" (UniqueName: \"kubernetes.io/projected/41d6b075-2987-431a-b6d2-2842bc5726de-kube-api-access-49gcd\") pod \"kube-state-metrics-0\" (UID: \"41d6b075-2987-431a-b6d2-2842bc5726de\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.686457 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49gcd\" (UniqueName: \"kubernetes.io/projected/41d6b075-2987-431a-b6d2-2842bc5726de-kube-api-access-49gcd\") pod \"kube-state-metrics-0\" (UID: \"41d6b075-2987-431a-b6d2-2842bc5726de\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.709176 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49gcd\" (UniqueName: \"kubernetes.io/projected/41d6b075-2987-431a-b6d2-2842bc5726de-kube-api-access-49gcd\") pod \"kube-state-metrics-0\" (UID: \"41d6b075-2987-431a-b6d2-2842bc5726de\") " pod="openstack/kube-state-metrics-0" Nov 23 07:08:21 crc kubenswrapper[5028]: I1123 07:08:21.844363 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:08:24 crc kubenswrapper[5028]: I1123 07:08:24.975705 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7xfsr"] Nov 23 07:08:24 crc kubenswrapper[5028]: I1123 07:08:24.976970 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:24 crc kubenswrapper[5028]: I1123 07:08:24.981337 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 23 07:08:24 crc kubenswrapper[5028]: I1123 07:08:24.981449 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 23 07:08:24 crc kubenswrapper[5028]: I1123 07:08:24.981625 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2n9xw" Nov 23 07:08:24 crc kubenswrapper[5028]: I1123 07:08:24.995304 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xfsr"] Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.038147 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-5cm8v"] Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.039752 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.052959 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-ovn-controller-tls-certs\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.053029 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run-ovn\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.053330 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-log-ovn\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.053382 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8856612c-6e19-4bcc-86ab-f5fd8f75896b-scripts\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.053469 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.053528 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-combined-ca-bundle\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.053617 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbpp\" (UniqueName: \"kubernetes.io/projected/8856612c-6e19-4bcc-86ab-f5fd8f75896b-kube-api-access-ntbpp\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.065490 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cm8v"] Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.112499 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.114280 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.118332 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.118736 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-f8v2k" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.118934 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.119140 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.119280 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.133255 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155298 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run-ovn\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155349 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-log-ovn\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155378 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8856612c-6e19-4bcc-86ab-f5fd8f75896b-scripts\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155414 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155440 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-combined-ca-bundle\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155524 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wxft\" (UniqueName: \"kubernetes.io/projected/595ec560-1f5a-44f8-bf67-feee6223a090-kube-api-access-6wxft\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155544 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnt45\" (UniqueName: 
\"kubernetes.io/projected/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-kube-api-access-rnt45\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155595 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-run\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155613 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155630 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-etc-ovs\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155651 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-config\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155673 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbpp\" (UniqueName: \"kubernetes.io/projected/8856612c-6e19-4bcc-86ab-f5fd8f75896b-kube-api-access-ntbpp\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155690 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155736 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155766 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155791 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-ovn-controller-tls-certs\") pod 
\"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155819 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155842 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-lib\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155871 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155892 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-log\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.155921 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-scripts\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.156936 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run-ovn\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.157133 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-log-ovn\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.158597 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.161537 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8856612c-6e19-4bcc-86ab-f5fd8f75896b-scripts\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.164910 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-ovn-controller-tls-certs\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.168309 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-combined-ca-bundle\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.173661 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntbpp\" (UniqueName: \"kubernetes.io/projected/8856612c-6e19-4bcc-86ab-f5fd8f75896b-kube-api-access-ntbpp\") pod \"ovn-controller-7xfsr\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.257787 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.257834 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-lib\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.257855 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.257876 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-log\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.257905 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-scripts\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258009 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wxft\" (UniqueName: \"kubernetes.io/projected/595ec560-1f5a-44f8-bf67-feee6223a090-kube-api-access-6wxft\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258024 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnt45\" (UniqueName: \"kubernetes.io/projected/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-kube-api-access-rnt45\") pod \"ovn-controller-ovs-5cm8v\" 
(UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258056 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-run\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258071 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-etc-ovs\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258085 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258101 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-config\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258123 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258145 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.258170 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.259576 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-log\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.260095 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.260283 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-etc-ovs\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.261314 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.261366 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-run\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.261516 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-lib\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.261866 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.262061 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-config\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.263072 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-scripts\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.263300 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.272682 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.273419 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.276534 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-rnt45\" (UniqueName: \"kubernetes.io/projected/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-kube-api-access-rnt45\") pod \"ovn-controller-ovs-5cm8v\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.289126 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wxft\" (UniqueName: \"kubernetes.io/projected/595ec560-1f5a-44f8-bf67-feee6223a090-kube-api-access-6wxft\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.305698 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.322317 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.353979 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:25 crc kubenswrapper[5028]: I1123 07:08:25.451241 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.574732 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.576347 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.580640 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sc5q2" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.585981 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.586322 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.586468 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.597599 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620034 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620363 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-config\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620383 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620450 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620483 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620513 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620532 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27hf4\" (UniqueName: \"kubernetes.io/projected/b05065f0-2269-4a88-abdf-45d2523ac60b-kube-api-access-27hf4\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.620583 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.721843 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722243 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722275 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27hf4\" (UniqueName: \"kubernetes.io/projected/b05065f0-2269-4a88-abdf-45d2523ac60b-kube-api-access-27hf4\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722341 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722389 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722411 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-config\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722429 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722484 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722671 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.722882 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.723589 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.723737 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-config\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.728727 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.728897 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.731117 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.740901 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27hf4\" (UniqueName: \"kubernetes.io/projected/b05065f0-2269-4a88-abdf-45d2523ac60b-kube-api-access-27hf4\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.749150 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:28 crc kubenswrapper[5028]: I1123 07:08:28.916214 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:30 crc kubenswrapper[5028]: E1123 07:08:30.835757 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 23 07:08:30 crc kubenswrapper[5028]: E1123 07:08:30.837289 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zqxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-nc986_openstack(737c28ab-1c2c-491a-9d21-35f334f38222): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:08:30 crc kubenswrapper[5028]: E1123 07:08:30.838934 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-nc986" podUID="737c28ab-1c2c-491a-9d21-35f334f38222" Nov 23 07:08:30 crc kubenswrapper[5028]: E1123 07:08:30.845582 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 23 07:08:30 crc kubenswrapper[5028]: E1123 07:08:30.845704 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzs9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-h94tr_openstack(282fa862-6460-463b-89e0-b05f09b9036d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:08:30 crc kubenswrapper[5028]: E1123 07:08:30.847296 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" podUID="282fa862-6460-463b-89e0-b05f09b9036d" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.210727 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.224923 5028 util.go:48] "No ready sandbox for pod can be found. 
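Buried in the Go-struct dumps above is what the failed init container was actually meant to do: run dnsmasq with --test, i.e. a configuration syntax check that validates and exits rather than serving (consistent with the later "container finished ... exitCode=0" events for the dnsmasq pods that do start). The $(POD_IP) reference is expanded by kubelet from the POD_IP env var, which the dump shows is fed from the downward API field status.podIP. Pulled out of the dump into a readable form; only the command, args, and that observation are taken from the log:

```go
package main

import "fmt"

func main() {
	// Init container entrypoint as dumped in the ErrImagePull trace above:
	// /bin/bash -c "dnsmasq ... --test". The --test flag makes dnsmasq
	// check its configuration and exit instead of staying in the foreground.
	command := []string{"/bin/bash"}
	args := []string{"-c", "dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d" +
		" --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug" +
		" --bind-interfaces --listen-address=$(POD_IP) --port 5353" +
		" --log-facility=- --no-hosts --domain-needed --no-resolv" +
		" --bogus-priv --log-queries --test"}
	fmt.Println(append(command, args...))
}
```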
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.363138 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzs9s\" (UniqueName: \"kubernetes.io/projected/282fa862-6460-463b-89e0-b05f09b9036d-kube-api-access-xzs9s\") pod \"282fa862-6460-463b-89e0-b05f09b9036d\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.363610 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-config\") pod \"737c28ab-1c2c-491a-9d21-35f334f38222\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.363645 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282fa862-6460-463b-89e0-b05f09b9036d-config\") pod \"282fa862-6460-463b-89e0-b05f09b9036d\" (UID: \"282fa862-6460-463b-89e0-b05f09b9036d\") " Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.363686 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-dns-svc\") pod \"737c28ab-1c2c-491a-9d21-35f334f38222\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.363710 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zqxd\" (UniqueName: \"kubernetes.io/projected/737c28ab-1c2c-491a-9d21-35f334f38222-kube-api-access-2zqxd\") pod \"737c28ab-1c2c-491a-9d21-35f334f38222\" (UID: \"737c28ab-1c2c-491a-9d21-35f334f38222\") " Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.364341 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-config" (OuterVolumeSpecName: "config") pod "737c28ab-1c2c-491a-9d21-35f334f38222" (UID: "737c28ab-1c2c-491a-9d21-35f334f38222"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.364911 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282fa862-6460-463b-89e0-b05f09b9036d-config" (OuterVolumeSpecName: "config") pod "282fa862-6460-463b-89e0-b05f09b9036d" (UID: "282fa862-6460-463b-89e0-b05f09b9036d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.365121 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "737c28ab-1c2c-491a-9d21-35f334f38222" (UID: "737c28ab-1c2c-491a-9d21-35f334f38222"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.370073 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282fa862-6460-463b-89e0-b05f09b9036d-kube-api-access-xzs9s" (OuterVolumeSpecName: "kube-api-access-xzs9s") pod "282fa862-6460-463b-89e0-b05f09b9036d" (UID: "282fa862-6460-463b-89e0-b05f09b9036d"). InnerVolumeSpecName "kube-api-access-xzs9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.370196 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737c28ab-1c2c-491a-9d21-35f334f38222-kube-api-access-2zqxd" (OuterVolumeSpecName: "kube-api-access-2zqxd") pod "737c28ab-1c2c-491a-9d21-35f334f38222" (UID: "737c28ab-1c2c-491a-9d21-35f334f38222"). InnerVolumeSpecName "kube-api-access-2zqxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.465463 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.465495 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/282fa862-6460-463b-89e0-b05f09b9036d-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.465505 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/737c28ab-1c2c-491a-9d21-35f334f38222-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.465515 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zqxd\" (UniqueName: \"kubernetes.io/projected/737c28ab-1c2c-491a-9d21-35f334f38222-kube-api-access-2zqxd\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.465526 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzs9s\" (UniqueName: \"kubernetes.io/projected/282fa862-6460-463b-89e0-b05f09b9036d-kube-api-access-xzs9s\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.476477 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 07:08:34 crc kubenswrapper[5028]: W1123 07:08:34.569684 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0cd2be3_d4e5_4ed7_80f9_54bc15ee3c50.slice/crio-86f5110817112f98f47b72f5219df5bb46c35adad37d1b330b154abe6695bc39 WatchSource:0}: Error finding container 86f5110817112f98f47b72f5219df5bb46c35adad37d1b330b154abe6695bc39: Status 404 returned error can't find the container with id 86f5110817112f98f47b72f5219df5bb46c35adad37d1b330b154abe6695bc39 Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.714148 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.719762 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:08:34 crc kubenswrapper[5028]: W1123 07:08:34.729351 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41d6b075_2987_431a_b6d2_2842bc5726de.slice/crio-d3672ae7d44eeea7030585574b4aeda23e9423701dcce98b2fa5fa236d280435 WatchSource:0}: Error finding container d3672ae7d44eeea7030585574b4aeda23e9423701dcce98b2fa5fa236d280435: Status 404 returned error can't find the container with id d3672ae7d44eeea7030585574b4aeda23e9423701dcce98b2fa5fa236d280435 Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.749400 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xfsr"] Nov 23 
07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.801282 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50","Type":"ContainerStarted","Data":"86f5110817112f98f47b72f5219df5bb46c35adad37d1b330b154abe6695bc39"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.804003 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41d6b075-2987-431a-b6d2-2842bc5726de","Type":"ContainerStarted","Data":"d3672ae7d44eeea7030585574b4aeda23e9423701dcce98b2fa5fa236d280435"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.804886 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-nc986" event={"ID":"737c28ab-1c2c-491a-9d21-35f334f38222","Type":"ContainerDied","Data":"11b80cdf6c9093618732c31fec74f11bbce9f6b2ad4355499a88cfb395653f09"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.804969 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-nc986" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.807603 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"88c70fc5-621a-45c5-bcf1-716d14e48792","Type":"ContainerStarted","Data":"481a7c1912530aa19976454094bc1eafee12f607b50a187d2973b114aebc1b12"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.810936 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" event={"ID":"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5","Type":"ContainerStarted","Data":"12af1423a295c6f478c8785d5aa48a368910a459421fea55e61284425915587f"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.817230 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" event={"ID":"282fa862-6460-463b-89e0-b05f09b9036d","Type":"ContainerDied","Data":"ca3fd42413dc925cbc6bf221c070e5dc1fd2fec1baf2dab69aaef27c6a5c6437"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.817247 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-h94tr" Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.819289 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3","Type":"ContainerStarted","Data":"006ab7ab67f29786ec3083302aa1080c06b96aed92b427f6b754d6543f3a78d1"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.820819 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" event={"ID":"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c","Type":"ContainerStarted","Data":"e94f774274e49deacd01adc7ca38ac5ec13f56f9fa03fa345f51ea85f097dc30"} Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.822570 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 07:08:34 crc kubenswrapper[5028]: W1123 07:08:34.871976 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595ec560_1f5a_44f8_bf67_feee6223a090.slice/crio-b8736f8c8d7aa53e8f93d55683a720e38fd59d08030678c5d9bfaaefe59bdb9f WatchSource:0}: Error finding container b8736f8c8d7aa53e8f93d55683a720e38fd59d08030678c5d9bfaaefe59bdb9f: Status 404 returned error can't find the container with id b8736f8c8d7aa53e8f93d55683a720e38fd59d08030678c5d9bfaaefe59bdb9f Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.877703 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-nc986"] Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.895340 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-nc986"] Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.908729 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-h94tr"] Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.916559 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-h94tr"] Nov 23 07:08:34 crc kubenswrapper[5028]: I1123 07:08:34.994169 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.066150 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282fa862-6460-463b-89e0-b05f09b9036d" path="/var/lib/kubelet/pods/282fa862-6460-463b-89e0-b05f09b9036d/volumes" Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.066658 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="737c28ab-1c2c-491a-9d21-35f334f38222" path="/var/lib/kubelet/pods/737c28ab-1c2c-491a-9d21-35f334f38222/volumes" Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.489705 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-5cm8v"] Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.830567 5028 generic.go:334] "Generic (PLEG): container finished" podID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerID="e94f774274e49deacd01adc7ca38ac5ec13f56f9fa03fa345f51ea85f097dc30" exitCode=0 Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.830625 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" event={"ID":"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c","Type":"ContainerDied","Data":"e94f774274e49deacd01adc7ca38ac5ec13f56f9fa03fa345f51ea85f097dc30"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.833213 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"595ec560-1f5a-44f8-bf67-feee6223a090","Type":"ContainerStarted","Data":"b8736f8c8d7aa53e8f93d55683a720e38fd59d08030678c5d9bfaaefe59bdb9f"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.834692 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cm8v" event={"ID":"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f","Type":"ContainerStarted","Data":"8db741f4e8db3546b964be3fed0b6b97acca91e5e24e2c7ecfbe837030e8e22f"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.835706 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xfsr" event={"ID":"8856612c-6e19-4bcc-86ab-f5fd8f75896b","Type":"ContainerStarted","Data":"272f84c189e1b28c413f2511fdddf4d927253b1101562b5e7a320d8b4c18d772"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.837216 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8399afb1-fbd2-4ce0-b980-46b317d6cfee","Type":"ContainerStarted","Data":"8d15922df2cca35d78979c23ab251a4e5ad6c02f4fa23139d560e3bd174d432a"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.840941 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7","Type":"ContainerStarted","Data":"5a0f902d9eb5361838184035de7e15282cacda3c6de606d046603750a68274e8"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.844743 5028 generic.go:334] "Generic (PLEG): container finished" podID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerID="12af1423a295c6f478c8785d5aa48a368910a459421fea55e61284425915587f" exitCode=0 Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.844814 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" event={"ID":"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5","Type":"ContainerDied","Data":"12af1423a295c6f478c8785d5aa48a368910a459421fea55e61284425915587f"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.849784 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3","Type":"ContainerStarted","Data":"5598b842a936be170ee1be87aa71ee10dbeff505eeb40e596a071e95109e33e5"} Nov 23 07:08:35 crc kubenswrapper[5028]: I1123 07:08:35.851214 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05065f0-2269-4a88-abdf-45d2523ac60b","Type":"ContainerStarted","Data":"eaab37fc9a9408fa10e98dcae012556c1b0b13d8daca6339c5dc74f925aef15e"} Nov 23 07:08:38 crc kubenswrapper[5028]: I1123 07:08:38.876975 5028 generic.go:334] "Generic (PLEG): container finished" podID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerID="481a7c1912530aa19976454094bc1eafee12f607b50a187d2973b114aebc1b12" exitCode=0 Nov 23 07:08:38 crc kubenswrapper[5028]: I1123 07:08:38.876994 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"88c70fc5-621a-45c5-bcf1-716d14e48792","Type":"ContainerDied","Data":"481a7c1912530aa19976454094bc1eafee12f607b50a187d2973b114aebc1b12"} Nov 23 07:08:38 crc kubenswrapper[5028]: I1123 07:08:38.880118 5028 generic.go:334] "Generic (PLEG): container finished" podID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerID="5598b842a936be170ee1be87aa71ee10dbeff505eeb40e596a071e95109e33e5" exitCode=0 Nov 23 07:08:38 crc kubenswrapper[5028]: I1123 07:08:38.880162 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3","Type":"ContainerDied","Data":"5598b842a936be170ee1be87aa71ee10dbeff505eeb40e596a071e95109e33e5"} Nov 23 07:08:39 crc kubenswrapper[5028]: I1123 07:08:39.890427 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" event={"ID":"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5","Type":"ContainerStarted","Data":"487521214eeb02d9de53ad55fcef5dea8d15920f39d4c5626968c7c0a735b745"} Nov 23 07:08:39 crc kubenswrapper[5028]: I1123 07:08:39.890986 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:39 crc kubenswrapper[5028]: I1123 07:08:39.913229 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" podStartSLOduration=7.403888563 podStartE2EDuration="25.913212135s" podCreationTimestamp="2025-11-23 07:08:14 +0000 UTC" firstStartedPulling="2025-11-23 07:08:15.634497642 +0000 UTC m=+1079.331902431" lastFinishedPulling="2025-11-23 07:08:34.143821204 +0000 UTC m=+1097.841226003" observedRunningTime="2025-11-23 07:08:39.908558472 +0000 UTC m=+1103.605963251" watchObservedRunningTime="2025-11-23 07:08:39.913212135 +0000 UTC m=+1103.610616914" Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.900855 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50","Type":"ContainerStarted","Data":"5f2108c80300b01fc1d30c52fcb9398b908685239d7a2e69ca2f22d1daf75c65"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.902039 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.906312 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xfsr" event={"ID":"8856612c-6e19-4bcc-86ab-f5fd8f75896b","Type":"ContainerStarted","Data":"964d01f35d8b75183e2fe8b77259a378a53f4e6342c8d59bbcaf1c413ad35077"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.906435 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7xfsr" Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.908278 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41d6b075-2987-431a-b6d2-2842bc5726de","Type":"ContainerStarted","Data":"42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.908709 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.910216 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05065f0-2269-4a88-abdf-45d2523ac60b","Type":"ContainerStarted","Data":"6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.925264 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" event={"ID":"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c","Type":"ContainerStarted","Data":"d5c510e8694c8063ad9c3383683fd4c6601677b404ce17dce2a337a4142de95c"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.925385 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:40 crc kubenswrapper[5028]: 
I1123 07:08:40.927869 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"595ec560-1f5a-44f8-bf67-feee6223a090","Type":"ContainerStarted","Data":"27822d21cd40adbf5bb91d6935a6236b638c8372268ebcb48349fef896ec2c52"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.930484 5028 generic.go:334] "Generic (PLEG): container finished" podID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerID="3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19" exitCode=0 Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.930542 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cm8v" event={"ID":"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f","Type":"ContainerDied","Data":"3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.934583 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"88c70fc5-621a-45c5-bcf1-716d14e48792","Type":"ContainerStarted","Data":"241f9f536ab38394348af160013fb0390313172f6bde249dc5d6d02b4ba10fb4"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.937580 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3","Type":"ContainerStarted","Data":"def0d1a5d1d2c89fcd962a46215131a3c686adec6de8fc5a5ece7cb87528bfac"} Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.949846 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=18.476586229 podStartE2EDuration="22.949828899s" podCreationTimestamp="2025-11-23 07:08:18 +0000 UTC" firstStartedPulling="2025-11-23 07:08:34.625907693 +0000 UTC m=+1098.323312492" lastFinishedPulling="2025-11-23 07:08:39.099150383 +0000 UTC m=+1102.796555162" observedRunningTime="2025-11-23 07:08:40.922104105 +0000 UTC m=+1104.619508894" watchObservedRunningTime="2025-11-23 07:08:40.949828899 +0000 UTC m=+1104.647233678" Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.951535 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7xfsr" podStartSLOduration=12.537035117 podStartE2EDuration="16.951528931s" podCreationTimestamp="2025-11-23 07:08:24 +0000 UTC" firstStartedPulling="2025-11-23 07:08:34.872349413 +0000 UTC m=+1098.569754182" lastFinishedPulling="2025-11-23 07:08:39.286843217 +0000 UTC m=+1102.984247996" observedRunningTime="2025-11-23 07:08:40.947297598 +0000 UTC m=+1104.644702377" watchObservedRunningTime="2025-11-23 07:08:40.951528931 +0000 UTC m=+1104.648933710" Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.967285 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.946752011 podStartE2EDuration="19.967266983s" podCreationTimestamp="2025-11-23 07:08:21 +0000 UTC" firstStartedPulling="2025-11-23 07:08:34.732842698 +0000 UTC m=+1098.430247477" lastFinishedPulling="2025-11-23 07:08:39.75335767 +0000 UTC m=+1103.450762449" observedRunningTime="2025-11-23 07:08:40.96014764 +0000 UTC m=+1104.657552419" watchObservedRunningTime="2025-11-23 07:08:40.967266983 +0000 UTC m=+1104.664671762" Nov 23 07:08:40 crc kubenswrapper[5028]: I1123 07:08:40.983908 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" podStartSLOduration=7.838306811 podStartE2EDuration="25.983891897s" 
podCreationTimestamp="2025-11-23 07:08:15 +0000 UTC" firstStartedPulling="2025-11-23 07:08:15.985181312 +0000 UTC m=+1079.682586091" lastFinishedPulling="2025-11-23 07:08:34.130766398 +0000 UTC m=+1097.828171177" observedRunningTime="2025-11-23 07:08:40.978853605 +0000 UTC m=+1104.676258404" watchObservedRunningTime="2025-11-23 07:08:40.983891897 +0000 UTC m=+1104.681296676" Nov 23 07:08:41 crc kubenswrapper[5028]: I1123 07:08:41.006685 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.882480779 podStartE2EDuration="25.006663511s" podCreationTimestamp="2025-11-23 07:08:16 +0000 UTC" firstStartedPulling="2025-11-23 07:08:18.020456932 +0000 UTC m=+1081.717861711" lastFinishedPulling="2025-11-23 07:08:34.144639664 +0000 UTC m=+1097.842044443" observedRunningTime="2025-11-23 07:08:40.996434992 +0000 UTC m=+1104.693839771" watchObservedRunningTime="2025-11-23 07:08:41.006663511 +0000 UTC m=+1104.704068290" Nov 23 07:08:41 crc kubenswrapper[5028]: I1123 07:08:41.022904 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.022885276 podStartE2EDuration="24.022885276s" podCreationTimestamp="2025-11-23 07:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:08:41.020322213 +0000 UTC m=+1104.717726992" watchObservedRunningTime="2025-11-23 07:08:41.022885276 +0000 UTC m=+1104.720290045" Nov 23 07:08:41 crc kubenswrapper[5028]: I1123 07:08:41.949501 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cm8v" event={"ID":"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f","Type":"ContainerStarted","Data":"acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6"} Nov 23 07:08:41 crc kubenswrapper[5028]: I1123 07:08:41.950694 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cm8v" event={"ID":"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f","Type":"ContainerStarted","Data":"87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa"} Nov 23 07:08:41 crc kubenswrapper[5028]: I1123 07:08:41.971867 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-5cm8v" podStartSLOduration=13.28929064 podStartE2EDuration="16.971850668s" podCreationTimestamp="2025-11-23 07:08:25 +0000 UTC" firstStartedPulling="2025-11-23 07:08:35.509462534 +0000 UTC m=+1099.206867313" lastFinishedPulling="2025-11-23 07:08:39.192022552 +0000 UTC m=+1102.889427341" observedRunningTime="2025-11-23 07:08:41.970980937 +0000 UTC m=+1105.668385806" watchObservedRunningTime="2025-11-23 07:08:41.971850668 +0000 UTC m=+1105.669255447" Nov 23 07:08:42 crc kubenswrapper[5028]: I1123 07:08:42.957992 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:42 crc kubenswrapper[5028]: I1123 07:08:42.958387 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:08:43 crc kubenswrapper[5028]: I1123 07:08:43.967331 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05065f0-2269-4a88-abdf-45d2523ac60b","Type":"ContainerStarted","Data":"22f3b63ee8932af549b2d73e2cdf2df0d790f7927e67756a4ac6bf3630f5b56b"} Nov 23 07:08:43 crc kubenswrapper[5028]: I1123 07:08:43.969443 5028 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"595ec560-1f5a-44f8-bf67-feee6223a090","Type":"ContainerStarted","Data":"d82fd949fe89f76a065fd1501ba17537d165f20a736688a50648e098ab446adf"} Nov 23 07:08:43 crc kubenswrapper[5028]: I1123 07:08:43.988182 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.230829991 podStartE2EDuration="16.988161381s" podCreationTimestamp="2025-11-23 07:08:27 +0000 UTC" firstStartedPulling="2025-11-23 07:08:35.07992598 +0000 UTC m=+1098.777330769" lastFinishedPulling="2025-11-23 07:08:42.83725738 +0000 UTC m=+1106.534662159" observedRunningTime="2025-11-23 07:08:43.98316343 +0000 UTC m=+1107.680568239" watchObservedRunningTime="2025-11-23 07:08:43.988161381 +0000 UTC m=+1107.685566170" Nov 23 07:08:44 crc kubenswrapper[5028]: I1123 07:08:44.006898 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.051972732 podStartE2EDuration="20.006871186s" podCreationTimestamp="2025-11-23 07:08:24 +0000 UTC" firstStartedPulling="2025-11-23 07:08:34.89613214 +0000 UTC m=+1098.593536919" lastFinishedPulling="2025-11-23 07:08:42.851030604 +0000 UTC m=+1106.548435373" observedRunningTime="2025-11-23 07:08:44.000929652 +0000 UTC m=+1107.698334461" watchObservedRunningTime="2025-11-23 07:08:44.006871186 +0000 UTC m=+1107.704275975" Nov 23 07:08:44 crc kubenswrapper[5028]: I1123 07:08:44.235918 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 23 07:08:44 crc kubenswrapper[5028]: I1123 07:08:44.992591 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:08:45 crc kubenswrapper[5028]: I1123 07:08:45.452115 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:45 crc kubenswrapper[5028]: I1123 07:08:45.475524 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:08:45 crc kubenswrapper[5028]: I1123 07:08:45.527006 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-szt9d"] Nov 23 07:08:45 crc kubenswrapper[5028]: I1123 07:08:45.981603 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" containerID="cri-o://487521214eeb02d9de53ad55fcef5dea8d15920f39d4c5626968c7c0a735b745" gracePeriod=10 Nov 23 07:08:46 crc kubenswrapper[5028]: I1123 07:08:46.453642 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:46 crc kubenswrapper[5028]: I1123 07:08:46.491608 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 23 07:08:46 crc kubenswrapper[5028]: I1123 07:08:46.916991 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:46 crc kubenswrapper[5028]: I1123 07:08:46.956852 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:46 crc kubenswrapper[5028]: I1123 07:08:46.987911 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.022453 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.024843 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.275355 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-h4v95"]
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.277243 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.286569 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-h4v95"]
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.288153 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.358224 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lrqsm"]
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.369649 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lrqsm"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.374017 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.376289 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lrqsm"]
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.407292 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-ovsdbserver-nb\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.407347 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-config\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.407392 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dl4\" (UniqueName: \"kubernetes.io/projected/37ea829d-7928-49f5-9194-d4b2db144a3c-kube-api-access-k8dl4\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.407434 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-dns-svc\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.491641 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.492010 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.496977 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-h4v95"]
Nov 23 07:08:47 crc kubenswrapper[5028]: E1123 07:08:47.504480 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-k8dl4 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-65c78595c5-h4v95" podUID="37ea829d-7928-49f5-9194-d4b2db144a3c"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.510748 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-config\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.510808 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-combined-ca-bundle\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.510825 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovs-rundir\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.510861 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dl4\" (UniqueName: \"kubernetes.io/projected/37ea829d-7928-49f5-9194-d4b2db144a3c-kube-api-access-k8dl4\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.510895 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-config\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.510924 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-dns-svc\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.510942 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvs8\" (UniqueName: \"kubernetes.io/projected/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-kube-api-access-sxvs8\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm"
\"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovn-rundir\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.511002 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.511056 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-ovsdbserver-nb\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.511985 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-ovsdbserver-nb\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.512510 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-config\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.513247 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-dns-svc\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.556998 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-sq7p4"] Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.558342 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.562817 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.565642 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dl4\" (UniqueName: \"kubernetes.io/projected/37ea829d-7928-49f5-9194-d4b2db144a3c-kube-api-access-k8dl4\") pod \"dnsmasq-dns-65c78595c5-h4v95\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " pod="openstack/dnsmasq-dns-65c78595c5-h4v95" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.571307 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.613160 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxvs8\" (UniqueName: \"kubernetes.io/projected/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-kube-api-access-sxvs8\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.613214 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovn-rundir\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.613252 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.613382 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-combined-ca-bundle\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.613406 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovs-rundir\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.613503 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-config\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.614006 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovs-rundir\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.614230 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-config\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.618273 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovn-rundir\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.620101 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.629572 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.633716 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.635073 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.635525 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.637714 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ttdtg" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.644099 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-combined-ca-bundle\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.675558 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-sq7p4"] Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.684534 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.686573 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxvs8\" (UniqueName: \"kubernetes.io/projected/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-kube-api-access-sxvs8\") pod \"ovn-controller-metrics-lrqsm\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.692940 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723149 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-config\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723229 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723275 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723350 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-scripts\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723381 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-config\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723397 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pmq5\" (UniqueName: \"kubernetes.io/projected/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-kube-api-access-2pmq5\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723417 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723434 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723453 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fwvx\" (UniqueName: \"kubernetes.io/projected/b54011e0-a24e-4484-bf62-b89dd6c2633c-kube-api-access-4fwvx\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc 
kubenswrapper[5028]: I1123 07:08:47.723772 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.723883 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-dns-svc\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.724056 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826221 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826767 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-scripts\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826808 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-config\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826828 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pmq5\" (UniqueName: \"kubernetes.io/projected/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-kube-api-access-2pmq5\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826854 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826879 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826909 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fwvx\" (UniqueName: 
\"kubernetes.io/projected/b54011e0-a24e-4484-bf62-b89dd6c2633c-kube-api-access-4fwvx\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826934 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826973 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-dns-svc\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.826997 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.827039 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-config\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.827080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.828377 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.829825 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-config\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.830226 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.830291 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " 
pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.830746 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-dns-svc\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.832490 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-scripts\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.835994 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.836696 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-config\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.843182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.843274 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.844919 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pmq5\" (UniqueName: \"kubernetes.io/projected/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-kube-api-access-2pmq5\") pod \"ovn-northd-0\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") " pod="openstack/ovn-northd-0" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.847646 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fwvx\" (UniqueName: \"kubernetes.io/projected/b54011e0-a24e-4484-bf62-b89dd6c2633c-kube-api-access-4fwvx\") pod \"dnsmasq-dns-5c7b6b5695-sq7p4\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.996710 5028 generic.go:334] "Generic (PLEG): container finished" podID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerID="487521214eeb02d9de53ad55fcef5dea8d15920f39d4c5626968c7c0a735b745" exitCode=0 Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.996821 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-h4v95" Nov 23 07:08:47 crc kubenswrapper[5028]: I1123 07:08:47.996832 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" event={"ID":"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5","Type":"ContainerDied","Data":"487521214eeb02d9de53ad55fcef5dea8d15920f39d4c5626968c7c0a735b745"} Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.004354 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-h4v95" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.120855 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.132189 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-config\") pod \"37ea829d-7928-49f5-9194-d4b2db144a3c\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.132543 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-config" (OuterVolumeSpecName: "config") pod "37ea829d-7928-49f5-9194-d4b2db144a3c" (UID: "37ea829d-7928-49f5-9194-d4b2db144a3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.132624 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8dl4\" (UniqueName: \"kubernetes.io/projected/37ea829d-7928-49f5-9194-d4b2db144a3c-kube-api-access-k8dl4\") pod \"37ea829d-7928-49f5-9194-d4b2db144a3c\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.133141 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-ovsdbserver-nb\") pod \"37ea829d-7928-49f5-9194-d4b2db144a3c\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.133192 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.133226 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-dns-svc\") pod \"37ea829d-7928-49f5-9194-d4b2db144a3c\" (UID: \"37ea829d-7928-49f5-9194-d4b2db144a3c\") " Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.133478 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37ea829d-7928-49f5-9194-d4b2db144a3c" (UID: "37ea829d-7928-49f5-9194-d4b2db144a3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.133765 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37ea829d-7928-49f5-9194-d4b2db144a3c" (UID: "37ea829d-7928-49f5-9194-d4b2db144a3c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.134825 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.134844 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.134854 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37ea829d-7928-49f5-9194-d4b2db144a3c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.135116 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lrqsm"] Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.135199 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ea829d-7928-49f5-9194-d4b2db144a3c-kube-api-access-k8dl4" (OuterVolumeSpecName: "kube-api-access-k8dl4") pod "37ea829d-7928-49f5-9194-d4b2db144a3c" (UID: "37ea829d-7928-49f5-9194-d4b2db144a3c"). InnerVolumeSpecName "kube-api-access-k8dl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:08:48 crc kubenswrapper[5028]: W1123 07:08:48.143276 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2670a19_fe04_4055_905d_f9a6f8d8b0b3.slice/crio-4713cde39842a8fcea9276d63cc598206b60c53ed8dbf0457ffc7607bb3aa358 WatchSource:0}: Error finding container 4713cde39842a8fcea9276d63cc598206b60c53ed8dbf0457ffc7607bb3aa358: Status 404 returned error can't find the container with id 4713cde39842a8fcea9276d63cc598206b60c53ed8dbf0457ffc7607bb3aa358 Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.236706 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8dl4\" (UniqueName: \"kubernetes.io/projected/37ea829d-7928-49f5-9194-d4b2db144a3c-kube-api-access-k8dl4\") on node \"crc\" DevicePath \"\"" Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.596554 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:08:48 crc kubenswrapper[5028]: W1123 07:08:48.599698 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ed4379e_5b70_40bf_bc3b_7fc8d557e0d1.slice/crio-529e550bfb69799e339e1814672f0eaf2c52bca462b4e1d5c98b2da2af515ceb WatchSource:0}: Error finding container 529e550bfb69799e339e1814672f0eaf2c52bca462b4e1d5c98b2da2af515ceb: Status 404 returned error can't find the container with id 529e550bfb69799e339e1814672f0eaf2c52bca462b4e1d5c98b2da2af515ceb Nov 23 07:08:48 crc kubenswrapper[5028]: I1123 07:08:48.651710 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-sq7p4"] Nov 23 07:08:48 crc kubenswrapper[5028]: W1123 07:08:48.657765 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb54011e0_a24e_4484_bf62_b89dd6c2633c.slice/crio-b092223dc7ceba47bb590a650840ed53b92044218ffe283182d1ccfe2a8e99e3 WatchSource:0}: Error finding container b092223dc7ceba47bb590a650840ed53b92044218ffe283182d1ccfe2a8e99e3: 
Nov 23 07:08:48 crc kubenswrapper[5028]: W1123 07:08:48.657765 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb54011e0_a24e_4484_bf62_b89dd6c2633c.slice/crio-b092223dc7ceba47bb590a650840ed53b92044218ffe283182d1ccfe2a8e99e3 WatchSource:0}: Error finding container b092223dc7ceba47bb590a650840ed53b92044218ffe283182d1ccfe2a8e99e3: Status 404 returned error can't find the container with id b092223dc7ceba47bb590a650840ed53b92044218ffe283182d1ccfe2a8e99e3
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.006721 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lrqsm" event={"ID":"a2670a19-fe04-4055-905d-f9a6f8d8b0b3","Type":"ContainerStarted","Data":"4713cde39842a8fcea9276d63cc598206b60c53ed8dbf0457ffc7607bb3aa358"}
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.008158 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" event={"ID":"b54011e0-a24e-4484-bf62-b89dd6c2633c","Type":"ContainerStarted","Data":"b092223dc7ceba47bb590a650840ed53b92044218ffe283182d1ccfe2a8e99e3"}
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.010455 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c78595c5-h4v95"
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.010462 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1","Type":"ContainerStarted","Data":"529e550bfb69799e339e1814672f0eaf2c52bca462b4e1d5c98b2da2af515ceb"}
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.076828 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-h4v95"]
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.076881 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65c78595c5-h4v95"]
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.180870 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.180921 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.242910 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 23 07:08:49 crc kubenswrapper[5028]: I1123 07:08:49.990347 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.101:5353: connect: connection refused"
Nov 23 07:08:50 crc kubenswrapper[5028]: I1123 07:08:50.078543 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 23 07:08:51 crc kubenswrapper[5028]: I1123 07:08:51.062275 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ea829d-7928-49f5-9194-d4b2db144a3c" path="/var/lib/kubelet/pods/37ea829d-7928-49f5-9194-d4b2db144a3c/volumes"
Nov 23 07:08:51 crc kubenswrapper[5028]: I1123 07:08:51.865189 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 23 07:08:51 crc kubenswrapper[5028]: I1123 07:08:51.886695 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-sq7p4"]
Nov 23 07:08:51 crc kubenswrapper[5028]: I1123 07:08:51.932741 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf8bcbfcf-rbqg4"]
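The "Probe failed" entry for dnsmasq-dns-7c6d9948dc-szt9d is expected during its graceful termination: the container was already sent SIGTERM (gracePeriod=10 above), so nothing is listening on 10.217.0.101:5353 any more. The output string is what Go's net package produces for a refused TCP dial, consistent with a tcpSocket-style readiness check. A minimal sketch of that kind of check (illustrative, not kubelet's prober):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// tcpReady reports whether something accepts TCP connections at addr.
func tcpReady(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, time.Second)
	if err != nil {
		// e.g. "dial tcp 10.217.0.101:5353: connect: connection refused"
		return err
	}
	return conn.Close()
}

func main() {
	fmt.Println(tcpReady("10.217.0.101:5353"))
}
```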
Need to start a new one" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:51 crc kubenswrapper[5028]: I1123 07:08:51.950471 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf8bcbfcf-rbqg4"] Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.014001 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4mp\" (UniqueName: \"kubernetes.io/projected/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-kube-api-access-nx4mp\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.014070 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-sb\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.014116 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-nb\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.014134 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-config\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.014161 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-dns-svc\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.115771 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4mp\" (UniqueName: \"kubernetes.io/projected/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-kube-api-access-nx4mp\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.115829 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-sb\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.115872 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-nb\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.115888 5028 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-config\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.115916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-dns-svc\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.117003 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-config\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.117062 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-sb\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.117657 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-nb\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.117657 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-dns-svc\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.135185 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4mp\" (UniqueName: \"kubernetes.io/projected/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-kube-api-access-nx4mp\") pod \"dnsmasq-dns-cf8bcbfcf-rbqg4\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.263994 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.670068 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf8bcbfcf-rbqg4"] Nov 23 07:08:52 crc kubenswrapper[5028]: W1123 07:08:52.676736 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d975fcb_e81a_4979_bd86_7d0f03c7d6fa.slice/crio-59c70017e6a2c3993cda7e0a65edb55915f5141b149bf58e68093c0a2c83e570 WatchSource:0}: Error finding container 59c70017e6a2c3993cda7e0a65edb55915f5141b149bf58e68093c0a2c83e570: Status 404 returned error can't find the container with id 59c70017e6a2c3993cda7e0a65edb55915f5141b149bf58e68093c0a2c83e570 Nov 23 07:08:52 crc kubenswrapper[5028]: I1123 07:08:52.994010 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:52.999887 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.003640 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.003666 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-t6mrr" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.004722 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.006753 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.013170 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.077710 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" event={"ID":"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa","Type":"ContainerStarted","Data":"59c70017e6a2c3993cda7e0a65edb55915f5141b149bf58e68093c0a2c83e570"} Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.131239 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvglf\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-kube-api-access-cvglf\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.131329 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-cache\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.131369 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-lock\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.131390 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.131517 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.233362 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvglf\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-kube-api-access-cvglf\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.233459 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-cache\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.233523 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-lock\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.233541 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.233591 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: E1123 07:08:53.233701 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:08:53 crc kubenswrapper[5028]: E1123 07:08:53.233714 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:08:53 crc kubenswrapper[5028]: E1123 07:08:53.233748 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:08:53.733734323 +0000 UTC m=+1117.431139102 (durationBeforeRetry 500ms). 
Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.234124 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.234632 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-lock\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.234783 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-cache\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.250290 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvglf\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-kube-api-access-cvglf\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.268908 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: I1123 07:08:53.745940 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:53 crc kubenswrapper[5028]: E1123 07:08:53.746327 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:08:53 crc kubenswrapper[5028]: E1123 07:08:53.746352 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:08:53 crc kubenswrapper[5028]: E1123 07:08:53.746412 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:08:54.746393218 +0000 UTC m=+1118.443797997 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : configmap "swift-ring-files" not found Nov 23 07:08:54 crc kubenswrapper[5028]: I1123 07:08:54.763307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:54 crc kubenswrapper[5028]: E1123 07:08:54.763500 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:08:54 crc kubenswrapper[5028]: E1123 07:08:54.763801 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:08:54 crc kubenswrapper[5028]: E1123 07:08:54.763915 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:08:56.763882986 +0000 UTC m=+1120.461287805 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : configmap "swift-ring-files" not found Nov 23 07:08:54 crc kubenswrapper[5028]: I1123 07:08:54.989589 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.101:5353: connect: connection refused" Nov 23 07:08:55 crc kubenswrapper[5028]: I1123 07:08:55.074989 5028 generic.go:334] "Generic (PLEG): container finished" podID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerID="7a7ce5c4354970c819b37f23fbb63c64188e276738d60c06d2cb418c510e273c" exitCode=0 Nov 23 07:08:55 crc kubenswrapper[5028]: I1123 07:08:55.075035 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" event={"ID":"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa","Type":"ContainerDied","Data":"7a7ce5c4354970c819b37f23fbb63c64188e276738d60c06d2cb418c510e273c"} Nov 23 07:08:56 crc kubenswrapper[5028]: I1123 07:08:56.085900 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" event={"ID":"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa","Type":"ContainerStarted","Data":"6f76ca77e083f2b23faba275f2a28f92caf5697e1ac8a778a1cfc5fe208083b9"} Nov 23 07:08:56 crc kubenswrapper[5028]: I1123 07:08:56.815059 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:08:56 crc kubenswrapper[5028]: E1123 07:08:56.815284 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:08:56 crc kubenswrapper[5028]: E1123 07:08:56.815495 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
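
Interleaved with the mount retries, the readiness failures for dnsmasq-dns-7c6d9948dc-szt9d report "dial tcp 10.217.0.101:5353: connect: connection refused", the signature of a TCP-socket-style check hitting a port nobody is listening on yet. A rough stand-in for that check (the address and port come from the probe output above; treating it as a plain TCP connect is an inference from the error text, not something the log states):

    # Minimal TCP "readiness probe": succeed iff the port accepts a connection.
    import socket

    def tcp_ready(host: str, port: int, timeout: float = 1.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:  # refused, unreachable, or timed out
            return False

    print("ready" if tcp_ready("10.217.0.101", 5353) else "not ready")
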
Nov 23 07:08:56 crc kubenswrapper[5028]: E1123 07:08:56.815627 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:09:00.815603561 +0000 UTC m=+1124.513008340 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : configmap "swift-ring-files" not found Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.014570 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lhkql"] Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.015605 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.017343 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-ring-data-devices\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.017376 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-combined-ca-bundle\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.017429 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-dispersionconf\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.017515 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-scripts\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.017547 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kltp\" (UniqueName: \"kubernetes.io/projected/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-kube-api-access-8kltp\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.017564 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-swiftconf\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.017597 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-etc-swift\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.021709 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.024868 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.030087 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lhkql"] Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.031304 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.095769 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.119211 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kltp\" (UniqueName: \"kubernetes.io/projected/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-kube-api-access-8kltp\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.119284 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-swiftconf\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.119329 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-etc-swift\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.119384 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-ring-data-devices\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.119403 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-combined-ca-bundle\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.119449 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-dispersionconf\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.119548 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-scripts\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.125455 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-scripts\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.129178 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-etc-swift\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.129428 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-ring-data-devices\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.135574 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-swiftconf\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.136340 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-combined-ca-bundle\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.141748 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kltp\" (UniqueName: \"kubernetes.io/projected/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-kube-api-access-8kltp\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.150961 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" podStartSLOduration=6.150927563 podStartE2EDuration="6.150927563s" podCreationTimestamp="2025-11-23 07:08:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:08:57.144729593 +0000 UTC m=+1120.842134392" watchObservedRunningTime="2025-11-23 07:08:57.150927563 +0000 UTC m=+1120.848332352" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.154633 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-dispersionconf\") pod \"swift-ring-rebalance-lhkql\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.331363 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:08:57 crc kubenswrapper[5028]: I1123 07:08:57.775140 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lhkql"] Nov 23 07:08:58 crc kubenswrapper[5028]: I1123 07:08:58.103932 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lhkql" event={"ID":"7cfda6b9-44dc-4c93-9013-bf315a3bf92d","Type":"ContainerStarted","Data":"f49b90d003b65e989724b5b923cb44b3ac1f16cde5e51af1e19ef5fe6d079887"} Nov 23 07:08:58 crc kubenswrapper[5028]: I1123 07:08:58.209449 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 23 07:08:58 crc kubenswrapper[5028]: I1123 07:08:58.291552 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerName="galera" probeResult="failure" output=< Nov 23 07:08:58 crc kubenswrapper[5028]: wsrep_local_state_comment (Joined) differs from Synced Nov 23 07:08:58 crc kubenswrapper[5028]: > Nov 23 07:08:59 crc kubenswrapper[5028]: I1123 07:08:59.989338 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.101:5353: connect: connection refused" Nov 23 07:08:59 crc kubenswrapper[5028]: I1123 07:08:59.989458 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:09:00 crc kubenswrapper[5028]: I1123 07:09:00.884680 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:09:00 crc kubenswrapper[5028]: E1123 07:09:00.885068 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:09:00 crc kubenswrapper[5028]: E1123 07:09:00.885281 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:09:00 crc kubenswrapper[5028]: E1123 07:09:00.885393 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:09:08.88535615 +0000 UTC m=+1132.582760969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : configmap "swift-ring-files" not found
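
The etc-swift retry delays so far read 500ms, 1s, 2s, 4s, and now 8s: the kubelet doubles durationBeforeRetry after each failed attempt (it reaches 16s further down; the eventual cap, commonly around two minutes, is an assumption here since this excerpt never gets that far). A toy reproduction of that schedule:

    # Doubling retry schedule matching the durationBeforeRetry values above.
    # The 2-minute cap is an assumption, not shown in this log excerpt.
    def backoff(initial_ms: int = 500, cap_ms: int = 120_000):
        delay_ms = initial_ms
        while True:
            yield delay_ms
            delay_ms = min(delay_ms * 2, cap_ms)

    schedule = backoff()
    for attempt in range(1, 7):
        print(f"attempt {attempt}: retry in {next(schedule) / 1000:g}s")
    # -> 0.5s, 1s, 2s, 4s, 8s, 16s
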
Nov 23 07:09:02 crc kubenswrapper[5028]: I1123 07:09:02.265158 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:09:02 crc kubenswrapper[5028]: I1123 07:09:02.310720 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-k6glv"] Nov 23 07:09:02 crc kubenswrapper[5028]: I1123 07:09:02.310932 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerName="dnsmasq-dns" containerID="cri-o://d5c510e8694c8063ad9c3383683fd4c6601677b404ce17dce2a337a4142de95c" gracePeriod=10 Nov 23 07:09:04 crc kubenswrapper[5028]: E1123 07:09:04.117329 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1882185583/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3e4ecc02b4b5e0860482a93599ba9ca598c5ce26c093c46e701f96fe51acb208" Nov 23 07:09:04 crc kubenswrapper[5028]: E1123 07:09:04.118068 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3e4ecc02b4b5e0860482a93599ba9ca598c5ce26c093c46e701f96fe51acb208,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58fh5b8h656h5ch5bch68h5dh67fhfbh5bch685hb6h98h67fh559h97h66h564h654hbh67fh699h6dhb8h68fh6ch65fh679hdch654hcch5c5q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n578hdh5b9h56h5f5h5bdh67dh686h5cfh578h576hc5h78h565h8h77h558h74h548h7dh7h674h588h55hd5h6dh559h5ch6bh576h648h697q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:nc8h5fbh567h645h557h4h5b9hd7hcbh696h54h5c9h694h684h558hbh549h568hf6h5d9h644h5fdh79h5d9hcbh575h56h66fh59h54fh669h55cq,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:nd9h5h697h5d8h7bh5c6h5d7h697h5b7hc5h5bdh699h686h8bh56fh65ch5bdh58bh696h5f7h95hbfh575h5ch58ch588h5fbhfch55h97h6ch597q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pmq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1882185583/1\": happened during read: context canceled" logger="UnhandledError" Nov 23 07:09:04 crc kubenswrapper[5028]: I1123 07:09:04.159566 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lrqsm" event={"ID":"a2670a19-fe04-4055-905d-f9a6f8d8b0b3","Type":"ContainerStarted","Data":"0c0eeb4c38181d669d383cb2ab7dff9f3df1057c5acd9ed1cce3af408d5c34a0"} Nov 23 07:09:04 crc kubenswrapper[5028]: I1123 07:09:04.162648 5028 generic.go:334] "Generic (PLEG): container finished" podID="b54011e0-a24e-4484-bf62-b89dd6c2633c" containerID="10175a332302cc416c2a22c49e13fdc01a479c012a53914e02402c97625e0fbd" exitCode=0 Nov 23 07:09:04 crc kubenswrapper[5028]: I1123 07:09:04.162773 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" event={"ID":"b54011e0-a24e-4484-bf62-b89dd6c2633c","Type":"ContainerDied","Data":"10175a332302cc416c2a22c49e13fdc01a479c012a53914e02402c97625e0fbd"} Nov 23 07:09:04 crc kubenswrapper[5028]: I1123 07:09:04.169403 5028 generic.go:334] "Generic (PLEG): container finished" podID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerID="d5c510e8694c8063ad9c3383683fd4c6601677b404ce17dce2a337a4142de95c" exitCode=0 Nov 23 07:09:04 crc kubenswrapper[5028]: I1123 07:09:04.169454 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" event={"ID":"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c","Type":"ContainerDied","Data":"d5c510e8694c8063ad9c3383683fd4c6601677b404ce17dce2a337a4142de95c"} Nov 23 07:09:04 crc kubenswrapper[5028]: I1123 07:09:04.214419 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lrqsm" podStartSLOduration=17.214400861 podStartE2EDuration="17.214400861s" podCreationTimestamp="2025-11-23 07:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:04.179769629 +0000 UTC m=+1127.877174418" watchObservedRunningTime="2025-11-23 07:09:04.214400861 +0000 UTC m=+1127.911805640" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.175133 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.180460 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.180656 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6b5695-sq7p4" event={"ID":"b54011e0-a24e-4484-bf62-b89dd6c2633c","Type":"ContainerDied","Data":"b092223dc7ceba47bb590a650840ed53b92044218ffe283182d1ccfe2a8e99e3"} Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.180692 5028 scope.go:117] "RemoveContainer" containerID="10175a332302cc416c2a22c49e13fdc01a479c012a53914e02402c97625e0fbd" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.360357 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fwvx\" (UniqueName: \"kubernetes.io/projected/b54011e0-a24e-4484-bf62-b89dd6c2633c-kube-api-access-4fwvx\") pod \"b54011e0-a24e-4484-bf62-b89dd6c2633c\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.360535 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-nb\") pod \"b54011e0-a24e-4484-bf62-b89dd6c2633c\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.360587 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-config\") pod \"b54011e0-a24e-4484-bf62-b89dd6c2633c\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.360639 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-dns-svc\") pod \"b54011e0-a24e-4484-bf62-b89dd6c2633c\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.360671 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-sb\") pod \"b54011e0-a24e-4484-bf62-b89dd6c2633c\" (UID: \"b54011e0-a24e-4484-bf62-b89dd6c2633c\") " Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.363839 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54011e0-a24e-4484-bf62-b89dd6c2633c-kube-api-access-4fwvx" (OuterVolumeSpecName: "kube-api-access-4fwvx") pod "b54011e0-a24e-4484-bf62-b89dd6c2633c" (UID: "b54011e0-a24e-4484-bf62-b89dd6c2633c"). InnerVolumeSpecName "kube-api-access-4fwvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.382613 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b54011e0-a24e-4484-bf62-b89dd6c2633c" (UID: "b54011e0-a24e-4484-bf62-b89dd6c2633c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.383387 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-config" (OuterVolumeSpecName: "config") pod "b54011e0-a24e-4484-bf62-b89dd6c2633c" (UID: "b54011e0-a24e-4484-bf62-b89dd6c2633c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.383430 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b54011e0-a24e-4484-bf62-b89dd6c2633c" (UID: "b54011e0-a24e-4484-bf62-b89dd6c2633c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.386695 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b54011e0-a24e-4484-bf62-b89dd6c2633c" (UID: "b54011e0-a24e-4484-bf62-b89dd6c2633c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.462204 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.462237 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.462288 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fwvx\" (UniqueName: \"kubernetes.io/projected/b54011e0-a24e-4484-bf62-b89dd6c2633c-kube-api-access-4fwvx\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.462302 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.462313 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54011e0-a24e-4484-bf62-b89dd6c2633c-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.543004 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-sq7p4"] Nov 23 07:09:05 crc kubenswrapper[5028]: I1123 07:09:05.549902 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6b5695-sq7p4"] Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.191178 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" event={"ID":"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5","Type":"ContainerDied","Data":"bf150b39b0fd113b6ad5caacf4578e88d50f115fb6ea839179c82a7be153f1fc"} Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.191540 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf150b39b0fd113b6ad5caacf4578e88d50f115fb6ea839179c82a7be153f1fc" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.201295 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" event={"ID":"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c","Type":"ContainerDied","Data":"72e2b3770f88a9a22a16608ec02cf03f9128aa9987dd47abf93b9f5a1a55df06"} Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.201336 5028 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="72e2b3770f88a9a22a16608ec02cf03f9128aa9987dd47abf93b9f5a1a55df06" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.445893 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.475993 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.480706 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-config\") pod \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.481209 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-dns-svc\") pod \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.481403 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-config\") pod \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.481575 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-dns-svc\") pod \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.481746 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz4pm\" (UniqueName: \"kubernetes.io/projected/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-kube-api-access-mz4pm\") pod \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\" (UID: \"c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c\") " Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.489640 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f9mf\" (UniqueName: \"kubernetes.io/projected/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-kube-api-access-9f9mf\") pod \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\" (UID: \"ffdc443a-4ba3-4b2b-9966-be5bcbf037d5\") " Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.494291 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-kube-api-access-9f9mf" (OuterVolumeSpecName: "kube-api-access-9f9mf") pod "ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" (UID: "ffdc443a-4ba3-4b2b-9966-be5bcbf037d5"). InnerVolumeSpecName "kube-api-access-9f9mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.518000 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-kube-api-access-mz4pm" (OuterVolumeSpecName: "kube-api-access-mz4pm") pod "c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" (UID: "c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c"). InnerVolumeSpecName "kube-api-access-mz4pm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.559455 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-config" (OuterVolumeSpecName: "config") pod "c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" (UID: "c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.561375 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" (UID: "ffdc443a-4ba3-4b2b-9966-be5bcbf037d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.562675 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" (UID: "c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.563210 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-config" (OuterVolumeSpecName: "config") pod "ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" (UID: "ffdc443a-4ba3-4b2b-9966-be5bcbf037d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:06 crc kubenswrapper[5028]: E1123 07:09:06.574279 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1882185583/1\\\": happened during read: context canceled\"" pod="openstack/ovn-northd-0" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.593600 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f9mf\" (UniqueName: \"kubernetes.io/projected/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-kube-api-access-9f9mf\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.593637 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.593651 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.593663 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.593674 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:06 crc kubenswrapper[5028]: I1123 07:09:06.593688 5028 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz4pm\" (UniqueName: \"kubernetes.io/projected/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c-kube-api-access-mz4pm\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.079827 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54011e0-a24e-4484-bf62-b89dd6c2633c" path="/var/lib/kubelet/pods/b54011e0-a24e-4484-bf62-b89dd6c2633c/volumes" Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.210885 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1","Type":"ContainerStarted","Data":"1ff78194c0b6fef4a35d4ce365c5ce7a085c94cc689d51e542e7782f680a7d0a"} Nov 23 07:09:07 crc kubenswrapper[5028]: E1123 07:09:07.212433 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3e4ecc02b4b5e0860482a93599ba9ca598c5ce26c093c46e701f96fe51acb208\\\"\"" pod="openstack/ovn-northd-0" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.214125 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.214119 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lhkql" event={"ID":"7cfda6b9-44dc-4c93-9013-bf315a3bf92d","Type":"ContainerStarted","Data":"9d025670f0758890925706d019dec0c58ba40c04e9ce8ae615b3bedf7b0255db"} Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.214754 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.246497 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-k6glv"] Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.251768 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-k6glv"] Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.270240 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-lhkql" podStartSLOduration=2.865031959 podStartE2EDuration="11.270183087s" podCreationTimestamp="2025-11-23 07:08:56 +0000 UTC" firstStartedPulling="2025-11-23 07:08:57.784875426 +0000 UTC m=+1121.482280205" lastFinishedPulling="2025-11-23 07:09:06.190026554 +0000 UTC m=+1129.887431333" observedRunningTime="2025-11-23 07:09:07.256094314 +0000 UTC m=+1130.953499093" watchObservedRunningTime="2025-11-23 07:09:07.270183087 +0000 UTC m=+1130.967587876" Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.281411 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-szt9d"] Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.291274 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-szt9d"] Nov 23 07:09:07 crc kubenswrapper[5028]: I1123 07:09:07.555476 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.228839 5028 generic.go:334] "Generic (PLEG): container finished" podID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerID="8d15922df2cca35d78979c23ab251a4e5ad6c02f4fa23139d560e3bd174d432a" exitCode=0 Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.228936 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8399afb1-fbd2-4ce0-b980-46b317d6cfee","Type":"ContainerDied","Data":"8d15922df2cca35d78979c23ab251a4e5ad6c02f4fa23139d560e3bd174d432a"} Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.232606 5028 generic.go:334] "Generic (PLEG): container finished" podID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerID="5a0f902d9eb5361838184035de7e15282cacda3c6de606d046603750a68274e8" exitCode=0 Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.232697 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7","Type":"ContainerDied","Data":"5a0f902d9eb5361838184035de7e15282cacda3c6de606d046603750a68274e8"} Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.234143 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:3e4ecc02b4b5e0860482a93599ba9ca598c5ce26c093c46e701f96fe51acb208\\\"\"" pod="openstack/ovn-northd-0" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.818830 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-eb1f-account-create-2qccz"] Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.819564 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="init" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.819677 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="init"
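
The pod_startup_latency_tracker entry above for swift-ring-rebalance-lhkql is internally consistent: podStartSLOduration appears to be the end-to-end startup time minus the image-pull window (that relationship is an inference from the field names, but the logged numbers reproduce it exactly):

    # Cross-check of the startup-latency fields logged at 07:09:07.270240.
    e2e  = 11.270183087  # podStartE2EDuration: watch-observed running - creation
    pull = 8.405151128   # lastFinishedPulling - firstStartedPulling
    #      (07:09:06.190026554 - 07:08:57.784875426)
    print(f"podStartSLOduration = {e2e - pull:.9f}s")  # 2.865031959s, as logged
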
podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="init" Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.819767 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54011e0-a24e-4484-bf62-b89dd6c2633c" containerName="init" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.819841 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54011e0-a24e-4484-bf62-b89dd6c2633c" containerName="init" Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.819926 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.820047 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.820143 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerName="init" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.820224 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerName="init" Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.820312 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerName="dnsmasq-dns" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.820389 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerName="dnsmasq-dns" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.820654 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.820748 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54011e0-a24e-4484-bf62-b89dd6c2633c" containerName="init" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.820839 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerName="dnsmasq-dns" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.821572 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.826682 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eb1f-account-create-2qccz"] Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.830822 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.863363 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-fxq6t"] Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.865027 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.880835 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fxq6t"] Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.939043 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc4e92e6-843f-4963-b659-fe67d1c71c8b-operator-scripts\") pod \"keystone-eb1f-account-create-2qccz\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.939867 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:09:08 crc kubenswrapper[5028]: I1123 07:09:08.940043 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xrd\" (UniqueName: \"kubernetes.io/projected/fc4e92e6-843f-4963-b659-fe67d1c71c8b-kube-api-access-49xrd\") pod \"keystone-eb1f-account-create-2qccz\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.940425 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.940520 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 23 07:09:08 crc kubenswrapper[5028]: E1123 07:09:08.940631 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:09:24.94061061 +0000 UTC m=+1148.638015389 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : configmap "swift-ring-files" not found
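
A last consistency check on the timestamps themselves: the m=+... suffix looks like Go's monotonic-clock reading, i.e. seconds since the kubelet process started, so wall-clock time minus that offset should be the same instant in every entry. Taking three of the etc-swift retry deadlines (values copied from this log; the interpretation of m=+ is an assumption):

    # Wall time minus the m=+ offset should be constant across entries.
    # Timestamps are truncated to microseconds for strptime.
    from datetime import datetime, timedelta

    samples = [
        ("2025-11-23 07:08:53.733734", 1117.431139102),
        ("2025-11-23 07:09:00.815603", 1124.513008340),
        ("2025-11-23 07:09:24.940610", 1148.638015389),
    ]
    for wall, mono in samples:
        t = datetime.strptime(wall, "%Y-%m-%d %H:%M:%S.%f")
        print(t - timedelta(seconds=mono))  # ~06:50:16.302595 every time
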
Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.041873 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfkl\" (UniqueName: \"kubernetes.io/projected/1f60db75-6beb-411d-afad-7841174fbf40-kube-api-access-tmfkl\") pod \"keystone-db-create-fxq6t\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.042088 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc4e92e6-843f-4963-b659-fe67d1c71c8b-operator-scripts\") pod \"keystone-eb1f-account-create-2qccz\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.042125 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f60db75-6beb-411d-afad-7841174fbf40-operator-scripts\") pod \"keystone-db-create-fxq6t\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.042195 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xrd\" (UniqueName: \"kubernetes.io/projected/fc4e92e6-843f-4963-b659-fe67d1c71c8b-kube-api-access-49xrd\") pod \"keystone-eb1f-account-create-2qccz\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.042866 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc4e92e6-843f-4963-b659-fe67d1c71c8b-operator-scripts\") pod \"keystone-eb1f-account-create-2qccz\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.062372 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" path="/var/lib/kubelet/pods/c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c/volumes" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.064714 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" path="/var/lib/kubelet/pods/ffdc443a-4ba3-4b2b-9966-be5bcbf037d5/volumes" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.068566 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xrd\" (UniqueName: \"kubernetes.io/projected/fc4e92e6-843f-4963-b659-fe67d1c71c8b-kube-api-access-49xrd\") pod \"keystone-eb1f-account-create-2qccz\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.105898 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-tgkzz"] Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.107286 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.116278 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tgkzz"] Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.143596 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfkl\" (UniqueName: \"kubernetes.io/projected/1f60db75-6beb-411d-afad-7841174fbf40-kube-api-access-tmfkl\") pod \"keystone-db-create-fxq6t\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.143927 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766fn\" (UniqueName: \"kubernetes.io/projected/6aa758fb-409d-4900-91b5-a424479b614e-kube-api-access-766fn\") pod \"placement-db-create-tgkzz\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.144139 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa758fb-409d-4900-91b5-a424479b614e-operator-scripts\") pod \"placement-db-create-tgkzz\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.144298 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f60db75-6beb-411d-afad-7841174fbf40-operator-scripts\") pod \"keystone-db-create-fxq6t\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.145081 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f60db75-6beb-411d-afad-7841174fbf40-operator-scripts\") pod \"keystone-db-create-fxq6t\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.159779 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfkl\" (UniqueName: \"kubernetes.io/projected/1f60db75-6beb-411d-afad-7841174fbf40-kube-api-access-tmfkl\") pod \"keystone-db-create-fxq6t\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.173637 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.193601 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.213052 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-31f4-account-create-nm5fp"] Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.214107 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.219309 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.221692 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-31f4-account-create-nm5fp"] Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.245577 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ftlx\" (UniqueName: \"kubernetes.io/projected/4e88bac4-6433-4c6a-a36f-433aec6c760c-kube-api-access-8ftlx\") pod \"placement-31f4-account-create-nm5fp\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.246868 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-766fn\" (UniqueName: \"kubernetes.io/projected/6aa758fb-409d-4900-91b5-a424479b614e-kube-api-access-766fn\") pod \"placement-db-create-tgkzz\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.247036 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa758fb-409d-4900-91b5-a424479b614e-operator-scripts\") pod \"placement-db-create-tgkzz\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.247143 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e88bac4-6433-4c6a-a36f-433aec6c760c-operator-scripts\") pod \"placement-31f4-account-create-nm5fp\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.248764 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa758fb-409d-4900-91b5-a424479b614e-operator-scripts\") pod \"placement-db-create-tgkzz\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.270534 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-766fn\" (UniqueName: \"kubernetes.io/projected/6aa758fb-409d-4900-91b5-a424479b614e-kube-api-access-766fn\") pod \"placement-db-create-tgkzz\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.280342 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8399afb1-fbd2-4ce0-b980-46b317d6cfee","Type":"ContainerStarted","Data":"54f05a9f66c1cff1703e9f91ab40df9d1d549e086aad84a05f1c7861710e604f"} Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.281428 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.286251 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7","Type":"ContainerStarted","Data":"a956eb2eee86d43afc41626c37352de689349467bd476d6a7ecbf7c28a1afb07"} Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.287076 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.311775 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.938258815 podStartE2EDuration="55.311760184s" podCreationTimestamp="2025-11-23 07:08:14 +0000 UTC" firstStartedPulling="2025-11-23 07:08:16.786020176 +0000 UTC m=+1080.483424955" lastFinishedPulling="2025-11-23 07:08:34.159521545 +0000 UTC m=+1097.856926324" observedRunningTime="2025-11-23 07:09:09.307109151 +0000 UTC m=+1133.004513950" watchObservedRunningTime="2025-11-23 07:09:09.311760184 +0000 UTC m=+1133.009164963" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.365883 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ftlx\" (UniqueName: \"kubernetes.io/projected/4e88bac4-6433-4c6a-a36f-433aec6c760c-kube-api-access-8ftlx\") pod \"placement-31f4-account-create-nm5fp\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.366627 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e88bac4-6433-4c6a-a36f-433aec6c760c-operator-scripts\") pod \"placement-31f4-account-create-nm5fp\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.369304 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e88bac4-6433-4c6a-a36f-433aec6c760c-operator-scripts\") pod \"placement-31f4-account-create-nm5fp\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.424399 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ftlx\" (UniqueName: \"kubernetes.io/projected/4e88bac4-6433-4c6a-a36f-433aec6c760c-kube-api-access-8ftlx\") pod \"placement-31f4-account-create-nm5fp\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.426796 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.531507 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.739032 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.909093604 podStartE2EDuration="54.739012052s" podCreationTimestamp="2025-11-23 07:08:15 +0000 UTC" firstStartedPulling="2025-11-23 07:08:17.315194068 +0000 UTC m=+1081.012598847" lastFinishedPulling="2025-11-23 07:08:34.145112496 +0000 UTC m=+1097.842517295" observedRunningTime="2025-11-23 07:09:09.350396534 +0000 UTC m=+1133.047801323" watchObservedRunningTime="2025-11-23 07:09:09.739012052 +0000 UTC m=+1133.436416831" Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.745160 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-eb1f-account-create-2qccz"] Nov 23 07:09:09 crc kubenswrapper[5028]: W1123 07:09:09.756293 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc4e92e6_843f_4963_b659_fe67d1c71c8b.slice/crio-cc71b3a9c3d80388f9fafbdcefd73df7fb33a6ab64a8491b096a75b0f977f0a3 WatchSource:0}: Error finding container cc71b3a9c3d80388f9fafbdcefd73df7fb33a6ab64a8491b096a75b0f977f0a3: Status 404 returned error can't find the container with id cc71b3a9c3d80388f9fafbdcefd73df7fb33a6ab64a8491b096a75b0f977f0a3 Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.813271 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-fxq6t"] Nov 23 07:09:09 crc kubenswrapper[5028]: W1123 07:09:09.817734 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f60db75_6beb_411d_afad_7841174fbf40.slice/crio-62519270a5da15c24a00adf663d59f29bb518cf3b1f7ebe0189d235f7467773d WatchSource:0}: Error finding container 62519270a5da15c24a00adf663d59f29bb518cf3b1f7ebe0189d235f7467773d: Status 404 returned error can't find the container with id 62519270a5da15c24a00adf663d59f29bb518cf3b1f7ebe0189d235f7467773d Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.922595 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-tgkzz"] Nov 23 07:09:09 crc kubenswrapper[5028]: I1123 07:09:09.990194 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c6d9948dc-szt9d" podUID="ffdc443a-4ba3-4b2b-9966-be5bcbf037d5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.101:5353: i/o timeout" Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.036304 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-31f4-account-create-nm5fp"] Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.296298 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-31f4-account-create-nm5fp" event={"ID":"4e88bac4-6433-4c6a-a36f-433aec6c760c","Type":"ContainerStarted","Data":"7cfb30b9c3f18728879f8478ba9936131bc7e83d3a5d60433da19a0ecea1feb7"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.296359 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-31f4-account-create-nm5fp" event={"ID":"4e88bac4-6433-4c6a-a36f-433aec6c760c","Type":"ContainerStarted","Data":"a4dd3841e21ef3bfbc6f79c5078dd7da5f47e172c21cbf74c2283429dad35aac"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.297618 5028 generic.go:334] "Generic (PLEG): container finished" 
podID="fc4e92e6-843f-4963-b659-fe67d1c71c8b" containerID="9f197175460f75323e366d22e7ffbd1000cbc5540ba72dc6c4f24628ee23a4c6" exitCode=0 Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.297690 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eb1f-account-create-2qccz" event={"ID":"fc4e92e6-843f-4963-b659-fe67d1c71c8b","Type":"ContainerDied","Data":"9f197175460f75323e366d22e7ffbd1000cbc5540ba72dc6c4f24628ee23a4c6"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.297718 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eb1f-account-create-2qccz" event={"ID":"fc4e92e6-843f-4963-b659-fe67d1c71c8b","Type":"ContainerStarted","Data":"cc71b3a9c3d80388f9fafbdcefd73df7fb33a6ab64a8491b096a75b0f977f0a3"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.300655 5028 generic.go:334] "Generic (PLEG): container finished" podID="1f60db75-6beb-411d-afad-7841174fbf40" containerID="cac29302621a7fbc9644c1dac717103cb9daabdefd6fe800f446150bd88ee23b" exitCode=0 Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.300733 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fxq6t" event={"ID":"1f60db75-6beb-411d-afad-7841174fbf40","Type":"ContainerDied","Data":"cac29302621a7fbc9644c1dac717103cb9daabdefd6fe800f446150bd88ee23b"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.300909 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fxq6t" event={"ID":"1f60db75-6beb-411d-afad-7841174fbf40","Type":"ContainerStarted","Data":"62519270a5da15c24a00adf663d59f29bb518cf3b1f7ebe0189d235f7467773d"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.303599 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tgkzz" event={"ID":"6aa758fb-409d-4900-91b5-a424479b614e","Type":"ContainerStarted","Data":"59ca6f41da174e8068bb5c1fa9541bd862fe411625339888ca7f7ff2ab63b9ef"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.303628 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tgkzz" event={"ID":"6aa758fb-409d-4900-91b5-a424479b614e","Type":"ContainerStarted","Data":"5b17c4fc2340a125b4de0feaccba1b2d5c3d0c6c85551615d722cbbe2ae2ca2c"} Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.321666 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-31f4-account-create-nm5fp" podStartSLOduration=1.321649608 podStartE2EDuration="1.321649608s" podCreationTimestamp="2025-11-23 07:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:10.315988611 +0000 UTC m=+1134.013393400" watchObservedRunningTime="2025-11-23 07:09:10.321649608 +0000 UTC m=+1134.019054387" Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.349768 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7xfsr" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" probeResult="failure" output=< Nov 23 07:09:10 crc kubenswrapper[5028]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 07:09:10 crc kubenswrapper[5028]: > Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.353984 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-tgkzz" podStartSLOduration=1.353942113 podStartE2EDuration="1.353942113s" 
podCreationTimestamp="2025-11-23 07:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:10.348763418 +0000 UTC m=+1134.046168197" watchObservedRunningTime="2025-11-23 07:09:10.353942113 +0000 UTC m=+1134.051346892" Nov 23 07:09:10 crc kubenswrapper[5028]: I1123 07:09:10.474768 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6486446b9f-k6glv" podUID="c21cbae7-6fa0-46d8-b634-1bbcbedd2c4c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.102:5353: i/o timeout" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.309964 5028 generic.go:334] "Generic (PLEG): container finished" podID="4e88bac4-6433-4c6a-a36f-433aec6c760c" containerID="7cfb30b9c3f18728879f8478ba9936131bc7e83d3a5d60433da19a0ecea1feb7" exitCode=0 Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.310006 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-31f4-account-create-nm5fp" event={"ID":"4e88bac4-6433-4c6a-a36f-433aec6c760c","Type":"ContainerDied","Data":"7cfb30b9c3f18728879f8478ba9936131bc7e83d3a5d60433da19a0ecea1feb7"} Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.311606 5028 generic.go:334] "Generic (PLEG): container finished" podID="6aa758fb-409d-4900-91b5-a424479b614e" containerID="59ca6f41da174e8068bb5c1fa9541bd862fe411625339888ca7f7ff2ab63b9ef" exitCode=0 Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.311664 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tgkzz" event={"ID":"6aa758fb-409d-4900-91b5-a424479b614e","Type":"ContainerDied","Data":"59ca6f41da174e8068bb5c1fa9541bd862fe411625339888ca7f7ff2ab63b9ef"} Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.714536 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.721660 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.808667 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f60db75-6beb-411d-afad-7841174fbf40-operator-scripts\") pod \"1f60db75-6beb-411d-afad-7841174fbf40\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.808749 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xrd\" (UniqueName: \"kubernetes.io/projected/fc4e92e6-843f-4963-b659-fe67d1c71c8b-kube-api-access-49xrd\") pod \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.808784 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc4e92e6-843f-4963-b659-fe67d1c71c8b-operator-scripts\") pod \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\" (UID: \"fc4e92e6-843f-4963-b659-fe67d1c71c8b\") " Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.808827 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmfkl\" (UniqueName: \"kubernetes.io/projected/1f60db75-6beb-411d-afad-7841174fbf40-kube-api-access-tmfkl\") pod \"1f60db75-6beb-411d-afad-7841174fbf40\" (UID: \"1f60db75-6beb-411d-afad-7841174fbf40\") " Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.809480 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f60db75-6beb-411d-afad-7841174fbf40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f60db75-6beb-411d-afad-7841174fbf40" (UID: "1f60db75-6beb-411d-afad-7841174fbf40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.810465 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc4e92e6-843f-4963-b659-fe67d1c71c8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc4e92e6-843f-4963-b659-fe67d1c71c8b" (UID: "fc4e92e6-843f-4963-b659-fe67d1c71c8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.814617 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f60db75-6beb-411d-afad-7841174fbf40-kube-api-access-tmfkl" (OuterVolumeSpecName: "kube-api-access-tmfkl") pod "1f60db75-6beb-411d-afad-7841174fbf40" (UID: "1f60db75-6beb-411d-afad-7841174fbf40"). InnerVolumeSpecName "kube-api-access-tmfkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.815247 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4e92e6-843f-4963-b659-fe67d1c71c8b-kube-api-access-49xrd" (OuterVolumeSpecName: "kube-api-access-49xrd") pod "fc4e92e6-843f-4963-b659-fe67d1c71c8b" (UID: "fc4e92e6-843f-4963-b659-fe67d1c71c8b"). InnerVolumeSpecName "kube-api-access-49xrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.910733 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmfkl\" (UniqueName: \"kubernetes.io/projected/1f60db75-6beb-411d-afad-7841174fbf40-kube-api-access-tmfkl\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.911042 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f60db75-6beb-411d-afad-7841174fbf40-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.911131 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49xrd\" (UniqueName: \"kubernetes.io/projected/fc4e92e6-843f-4963-b659-fe67d1c71c8b-kube-api-access-49xrd\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:11 crc kubenswrapper[5028]: I1123 07:09:11.911202 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc4e92e6-843f-4963-b659-fe67d1c71c8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.321654 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-eb1f-account-create-2qccz" event={"ID":"fc4e92e6-843f-4963-b659-fe67d1c71c8b","Type":"ContainerDied","Data":"cc71b3a9c3d80388f9fafbdcefd73df7fb33a6ab64a8491b096a75b0f977f0a3"} Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.322997 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc71b3a9c3d80388f9fafbdcefd73df7fb33a6ab64a8491b096a75b0f977f0a3" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.322426 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-eb1f-account-create-2qccz" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.324281 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-fxq6t" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.324419 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-fxq6t" event={"ID":"1f60db75-6beb-411d-afad-7841174fbf40","Type":"ContainerDied","Data":"62519270a5da15c24a00adf663d59f29bb518cf3b1f7ebe0189d235f7467773d"} Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.324494 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62519270a5da15c24a00adf663d59f29bb518cf3b1f7ebe0189d235f7467773d" Nov 23 07:09:12 crc kubenswrapper[5028]: E1123 07:09:12.511334 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f60db75_6beb_411d_afad_7841174fbf40.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc4e92e6_843f_4963_b659_fe67d1c71c8b.slice\": RecentStats: unable to find data in memory cache]" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.600864 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.666314 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.727938 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e88bac4-6433-4c6a-a36f-433aec6c760c-operator-scripts\") pod \"4e88bac4-6433-4c6a-a36f-433aec6c760c\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.728102 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ftlx\" (UniqueName: \"kubernetes.io/projected/4e88bac4-6433-4c6a-a36f-433aec6c760c-kube-api-access-8ftlx\") pod \"4e88bac4-6433-4c6a-a36f-433aec6c760c\" (UID: \"4e88bac4-6433-4c6a-a36f-433aec6c760c\") " Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.728165 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa758fb-409d-4900-91b5-a424479b614e-operator-scripts\") pod \"6aa758fb-409d-4900-91b5-a424479b614e\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.728246 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-766fn\" (UniqueName: \"kubernetes.io/projected/6aa758fb-409d-4900-91b5-a424479b614e-kube-api-access-766fn\") pod \"6aa758fb-409d-4900-91b5-a424479b614e\" (UID: \"6aa758fb-409d-4900-91b5-a424479b614e\") " Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.728423 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e88bac4-6433-4c6a-a36f-433aec6c760c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e88bac4-6433-4c6a-a36f-433aec6c760c" (UID: "4e88bac4-6433-4c6a-a36f-433aec6c760c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.728599 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e88bac4-6433-4c6a-a36f-433aec6c760c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.729191 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa758fb-409d-4900-91b5-a424479b614e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6aa758fb-409d-4900-91b5-a424479b614e" (UID: "6aa758fb-409d-4900-91b5-a424479b614e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.731725 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e88bac4-6433-4c6a-a36f-433aec6c760c-kube-api-access-8ftlx" (OuterVolumeSpecName: "kube-api-access-8ftlx") pod "4e88bac4-6433-4c6a-a36f-433aec6c760c" (UID: "4e88bac4-6433-4c6a-a36f-433aec6c760c"). InnerVolumeSpecName "kube-api-access-8ftlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.738167 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa758fb-409d-4900-91b5-a424479b614e-kube-api-access-766fn" (OuterVolumeSpecName: "kube-api-access-766fn") pod "6aa758fb-409d-4900-91b5-a424479b614e" (UID: "6aa758fb-409d-4900-91b5-a424479b614e"). 
InnerVolumeSpecName "kube-api-access-766fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.830126 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ftlx\" (UniqueName: \"kubernetes.io/projected/4e88bac4-6433-4c6a-a36f-433aec6c760c-kube-api-access-8ftlx\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.830169 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6aa758fb-409d-4900-91b5-a424479b614e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:12 crc kubenswrapper[5028]: I1123 07:09:12.830185 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-766fn\" (UniqueName: \"kubernetes.io/projected/6aa758fb-409d-4900-91b5-a424479b614e-kube-api-access-766fn\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:13 crc kubenswrapper[5028]: I1123 07:09:13.332400 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-31f4-account-create-nm5fp" event={"ID":"4e88bac4-6433-4c6a-a36f-433aec6c760c","Type":"ContainerDied","Data":"a4dd3841e21ef3bfbc6f79c5078dd7da5f47e172c21cbf74c2283429dad35aac"} Nov 23 07:09:13 crc kubenswrapper[5028]: I1123 07:09:13.332458 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4dd3841e21ef3bfbc6f79c5078dd7da5f47e172c21cbf74c2283429dad35aac" Nov 23 07:09:13 crc kubenswrapper[5028]: I1123 07:09:13.332578 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-31f4-account-create-nm5fp" Nov 23 07:09:13 crc kubenswrapper[5028]: I1123 07:09:13.333719 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-tgkzz" event={"ID":"6aa758fb-409d-4900-91b5-a424479b614e","Type":"ContainerDied","Data":"5b17c4fc2340a125b4de0feaccba1b2d5c3d0c6c85551615d722cbbe2ae2ca2c"} Nov 23 07:09:13 crc kubenswrapper[5028]: I1123 07:09:13.333739 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b17c4fc2340a125b4de0feaccba1b2d5c3d0c6c85551615d722cbbe2ae2ca2c" Nov 23 07:09:13 crc kubenswrapper[5028]: I1123 07:09:13.333796 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-tgkzz" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.341674 5028 generic.go:334] "Generic (PLEG): container finished" podID="7cfda6b9-44dc-4c93-9013-bf315a3bf92d" containerID="9d025670f0758890925706d019dec0c58ba40c04e9ce8ae615b3bedf7b0255db" exitCode=0 Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.341891 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lhkql" event={"ID":"7cfda6b9-44dc-4c93-9013-bf315a3bf92d","Type":"ContainerDied","Data":"9d025670f0758890925706d019dec0c58ba40c04e9ce8ae615b3bedf7b0255db"} Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363175 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-snkp6"] Nov 23 07:09:14 crc kubenswrapper[5028]: E1123 07:09:14.363498 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa758fb-409d-4900-91b5-a424479b614e" containerName="mariadb-database-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363510 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa758fb-409d-4900-91b5-a424479b614e" containerName="mariadb-database-create" Nov 23 07:09:14 crc kubenswrapper[5028]: E1123 07:09:14.363534 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4e92e6-843f-4963-b659-fe67d1c71c8b" containerName="mariadb-account-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363542 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4e92e6-843f-4963-b659-fe67d1c71c8b" containerName="mariadb-account-create" Nov 23 07:09:14 crc kubenswrapper[5028]: E1123 07:09:14.363553 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f60db75-6beb-411d-afad-7841174fbf40" containerName="mariadb-database-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363559 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f60db75-6beb-411d-afad-7841174fbf40" containerName="mariadb-database-create" Nov 23 07:09:14 crc kubenswrapper[5028]: E1123 07:09:14.363570 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e88bac4-6433-4c6a-a36f-433aec6c760c" containerName="mariadb-account-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363575 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e88bac4-6433-4c6a-a36f-433aec6c760c" containerName="mariadb-account-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363815 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f60db75-6beb-411d-afad-7841174fbf40" containerName="mariadb-database-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363831 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc4e92e6-843f-4963-b659-fe67d1c71c8b" containerName="mariadb-account-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363850 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa758fb-409d-4900-91b5-a424479b614e" containerName="mariadb-database-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.363857 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e88bac4-6433-4c6a-a36f-433aec6c760c" containerName="mariadb-account-create" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.364380 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.380051 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-snkp6"] Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.455854 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drx8k\" (UniqueName: \"kubernetes.io/projected/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-kube-api-access-drx8k\") pod \"glance-db-create-snkp6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.455962 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-operator-scripts\") pod \"glance-db-create-snkp6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.469828 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8cc3-account-create-kq8vp"] Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.470825 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.473397 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.479126 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8cc3-account-create-kq8vp"] Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.557622 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0bfc63d-fbf6-48be-b850-a3370894112b-operator-scripts\") pod \"glance-8cc3-account-create-kq8vp\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.558070 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drx8k\" (UniqueName: \"kubernetes.io/projected/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-kube-api-access-drx8k\") pod \"glance-db-create-snkp6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.558238 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5q5\" (UniqueName: \"kubernetes.io/projected/a0bfc63d-fbf6-48be-b850-a3370894112b-kube-api-access-sd5q5\") pod \"glance-8cc3-account-create-kq8vp\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.558427 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-operator-scripts\") pod \"glance-db-create-snkp6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.559201 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-operator-scripts\") pod \"glance-db-create-snkp6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.575475 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drx8k\" (UniqueName: \"kubernetes.io/projected/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-kube-api-access-drx8k\") pod \"glance-db-create-snkp6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.659819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd5q5\" (UniqueName: \"kubernetes.io/projected/a0bfc63d-fbf6-48be-b850-a3370894112b-kube-api-access-sd5q5\") pod \"glance-8cc3-account-create-kq8vp\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.659927 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0bfc63d-fbf6-48be-b850-a3370894112b-operator-scripts\") pod \"glance-8cc3-account-create-kq8vp\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.660583 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0bfc63d-fbf6-48be-b850-a3370894112b-operator-scripts\") pod \"glance-8cc3-account-create-kq8vp\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.679652 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd5q5\" (UniqueName: \"kubernetes.io/projected/a0bfc63d-fbf6-48be-b850-a3370894112b-kube-api-access-sd5q5\") pod \"glance-8cc3-account-create-kq8vp\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.680642 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-snkp6" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.787339 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:14 crc kubenswrapper[5028]: I1123 07:09:14.943042 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-snkp6"] Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.278392 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8cc3-account-create-kq8vp"] Nov 23 07:09:15 crc kubenswrapper[5028]: W1123 07:09:15.284854 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0bfc63d_fbf6_48be_b850_a3370894112b.slice/crio-2e4647221a8aa74ae1c1f68b496382b5e02c5b5cd74af3fee0dad16764db78d3 WatchSource:0}: Error finding container 2e4647221a8aa74ae1c1f68b496382b5e02c5b5cd74af3fee0dad16764db78d3: Status 404 returned error can't find the container with id 2e4647221a8aa74ae1c1f68b496382b5e02c5b5cd74af3fee0dad16764db78d3 Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.351654 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7xfsr" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" probeResult="failure" output=< Nov 23 07:09:15 crc kubenswrapper[5028]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 23 07:09:15 crc kubenswrapper[5028]: > Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.352582 5028 generic.go:334] "Generic (PLEG): container finished" podID="b9255961-4bb2-4ffc-af3d-9fd8998c59d6" containerID="46c8fdaa953f4356184a9b835cf0e50171a5139cfaf43a971d468ac33651df8f" exitCode=0 Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.352663 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snkp6" event={"ID":"b9255961-4bb2-4ffc-af3d-9fd8998c59d6","Type":"ContainerDied","Data":"46c8fdaa953f4356184a9b835cf0e50171a5139cfaf43a971d468ac33651df8f"} Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.352691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snkp6" event={"ID":"b9255961-4bb2-4ffc-af3d-9fd8998c59d6","Type":"ContainerStarted","Data":"a1c94d8943548cac755ce528394498812ee7925254e1319b357ac8bdba237416"} Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.354212 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cc3-account-create-kq8vp" event={"ID":"a0bfc63d-fbf6-48be-b850-a3370894112b","Type":"ContainerStarted","Data":"2e4647221a8aa74ae1c1f68b496382b5e02c5b5cd74af3fee0dad16764db78d3"} Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.409917 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.429262 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.631131 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7xfsr-config-wtknh"] Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.632805 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.635026 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.643793 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xfsr-config-wtknh"] Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.719796 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.782551 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-combined-ca-bundle\") pod \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.782600 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-dispersionconf\") pod \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.782670 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-ring-data-devices\") pod \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.782715 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kltp\" (UniqueName: \"kubernetes.io/projected/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-kube-api-access-8kltp\") pod \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.782764 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-scripts\") pod \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.782808 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-etc-swift\") pod \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.782872 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-swiftconf\") pod \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\" (UID: \"7cfda6b9-44dc-4c93-9013-bf315a3bf92d\") " Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.783073 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-log-ovn\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.783101 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-scripts\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.783159 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-additional-scripts\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.783198 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.783257 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjtpv\" (UniqueName: \"kubernetes.io/projected/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-kube-api-access-bjtpv\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.783327 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run-ovn\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.783569 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7cfda6b9-44dc-4c93-9013-bf315a3bf92d" (UID: "7cfda6b9-44dc-4c93-9013-bf315a3bf92d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.784250 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7cfda6b9-44dc-4c93-9013-bf315a3bf92d" (UID: "7cfda6b9-44dc-4c93-9013-bf315a3bf92d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.790504 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-kube-api-access-8kltp" (OuterVolumeSpecName: "kube-api-access-8kltp") pod "7cfda6b9-44dc-4c93-9013-bf315a3bf92d" (UID: "7cfda6b9-44dc-4c93-9013-bf315a3bf92d"). InnerVolumeSpecName "kube-api-access-8kltp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.791027 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7cfda6b9-44dc-4c93-9013-bf315a3bf92d" (UID: "7cfda6b9-44dc-4c93-9013-bf315a3bf92d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.804261 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-scripts" (OuterVolumeSpecName: "scripts") pod "7cfda6b9-44dc-4c93-9013-bf315a3bf92d" (UID: "7cfda6b9-44dc-4c93-9013-bf315a3bf92d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.814874 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7cfda6b9-44dc-4c93-9013-bf315a3bf92d" (UID: "7cfda6b9-44dc-4c93-9013-bf315a3bf92d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.826649 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cfda6b9-44dc-4c93-9013-bf315a3bf92d" (UID: "7cfda6b9-44dc-4c93-9013-bf315a3bf92d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.884832 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjtpv\" (UniqueName: \"kubernetes.io/projected/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-kube-api-access-bjtpv\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.885268 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run-ovn\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.885406 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-log-ovn\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.885508 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-scripts\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.885783 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-additional-scripts\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.885929 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886141 5028 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886270 5028 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886362 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kltp\" (UniqueName: \"kubernetes.io/projected/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-kube-api-access-8kltp\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886453 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886525 5028 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886589 5028 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886649 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfda6b9-44dc-4c93-9013-bf315a3bf92d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.885582 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run-ovn\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886759 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-additional-scripts\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.885674 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-log-ovn\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: 
\"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.886307 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.887656 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-scripts\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:15 crc kubenswrapper[5028]: I1123 07:09:15.901178 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjtpv\" (UniqueName: \"kubernetes.io/projected/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-kube-api-access-bjtpv\") pod \"ovn-controller-7xfsr-config-wtknh\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.017087 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.274828 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xfsr-config-wtknh"] Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.377599 5028 generic.go:334] "Generic (PLEG): container finished" podID="a0bfc63d-fbf6-48be-b850-a3370894112b" containerID="0ae8d8e3dd53a9174fc5d55f7293f9f537efa92b9a5ea3f89fa1c1f4fa8c615e" exitCode=0 Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.377811 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cc3-account-create-kq8vp" event={"ID":"a0bfc63d-fbf6-48be-b850-a3370894112b","Type":"ContainerDied","Data":"0ae8d8e3dd53a9174fc5d55f7293f9f537efa92b9a5ea3f89fa1c1f4fa8c615e"} Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.386341 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lhkql" event={"ID":"7cfda6b9-44dc-4c93-9013-bf315a3bf92d","Type":"ContainerDied","Data":"f49b90d003b65e989724b5b923cb44b3ac1f16cde5e51af1e19ef5fe6d079887"} Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.386389 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f49b90d003b65e989724b5b923cb44b3ac1f16cde5e51af1e19ef5fe6d079887" Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.386396 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lhkql" Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.387964 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xfsr-config-wtknh" event={"ID":"c29b4bbb-7011-42a5-8f9a-e31f2efda51e","Type":"ContainerStarted","Data":"55e633c764e509f72d76f2d69fcf513642edc87a43742fb879b5b83a418f7f9c"} Nov 23 07:09:16 crc kubenswrapper[5028]: I1123 07:09:16.909587 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-snkp6" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.010166 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drx8k\" (UniqueName: \"kubernetes.io/projected/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-kube-api-access-drx8k\") pod \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.010233 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-operator-scripts\") pod \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\" (UID: \"b9255961-4bb2-4ffc-af3d-9fd8998c59d6\") " Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.011306 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9255961-4bb2-4ffc-af3d-9fd8998c59d6" (UID: "b9255961-4bb2-4ffc-af3d-9fd8998c59d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.018164 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-kube-api-access-drx8k" (OuterVolumeSpecName: "kube-api-access-drx8k") pod "b9255961-4bb2-4ffc-af3d-9fd8998c59d6" (UID: "b9255961-4bb2-4ffc-af3d-9fd8998c59d6"). InnerVolumeSpecName "kube-api-access-drx8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.115831 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drx8k\" (UniqueName: \"kubernetes.io/projected/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-kube-api-access-drx8k\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.115875 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9255961-4bb2-4ffc-af3d-9fd8998c59d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.396266 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-snkp6" event={"ID":"b9255961-4bb2-4ffc-af3d-9fd8998c59d6","Type":"ContainerDied","Data":"a1c94d8943548cac755ce528394498812ee7925254e1319b357ac8bdba237416"} Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.396307 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c94d8943548cac755ce528394498812ee7925254e1319b357ac8bdba237416" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.396374 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-snkp6" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.400348 5028 generic.go:334] "Generic (PLEG): container finished" podID="c29b4bbb-7011-42a5-8f9a-e31f2efda51e" containerID="2386705abd6eaf35538b22b345a492d38b6d23697b55057e0679f33a8700fefd" exitCode=0 Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.400398 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xfsr-config-wtknh" event={"ID":"c29b4bbb-7011-42a5-8f9a-e31f2efda51e","Type":"ContainerDied","Data":"2386705abd6eaf35538b22b345a492d38b6d23697b55057e0679f33a8700fefd"} Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.699327 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.825680 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd5q5\" (UniqueName: \"kubernetes.io/projected/a0bfc63d-fbf6-48be-b850-a3370894112b-kube-api-access-sd5q5\") pod \"a0bfc63d-fbf6-48be-b850-a3370894112b\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.825723 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0bfc63d-fbf6-48be-b850-a3370894112b-operator-scripts\") pod \"a0bfc63d-fbf6-48be-b850-a3370894112b\" (UID: \"a0bfc63d-fbf6-48be-b850-a3370894112b\") " Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.826609 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bfc63d-fbf6-48be-b850-a3370894112b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0bfc63d-fbf6-48be-b850-a3370894112b" (UID: "a0bfc63d-fbf6-48be-b850-a3370894112b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.829896 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bfc63d-fbf6-48be-b850-a3370894112b-kube-api-access-sd5q5" (OuterVolumeSpecName: "kube-api-access-sd5q5") pod "a0bfc63d-fbf6-48be-b850-a3370894112b" (UID: "a0bfc63d-fbf6-48be-b850-a3370894112b"). InnerVolumeSpecName "kube-api-access-sd5q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.927166 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd5q5\" (UniqueName: \"kubernetes.io/projected/a0bfc63d-fbf6-48be-b850-a3370894112b-kube-api-access-sd5q5\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:17 crc kubenswrapper[5028]: I1123 07:09:17.927196 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0bfc63d-fbf6-48be-b850-a3370894112b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.409372 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8cc3-account-create-kq8vp" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.411084 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8cc3-account-create-kq8vp" event={"ID":"a0bfc63d-fbf6-48be-b850-a3370894112b","Type":"ContainerDied","Data":"2e4647221a8aa74ae1c1f68b496382b5e02c5b5cd74af3fee0dad16764db78d3"} Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.411134 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e4647221a8aa74ae1c1f68b496382b5e02c5b5cd74af3fee0dad16764db78d3" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.738402 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.840202 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-log-ovn\") pod \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.840378 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c29b4bbb-7011-42a5-8f9a-e31f2efda51e" (UID: "c29b4bbb-7011-42a5-8f9a-e31f2efda51e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.840422 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-additional-scripts\") pod \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.840482 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-scripts\") pod \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.840503 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjtpv\" (UniqueName: \"kubernetes.io/projected/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-kube-api-access-bjtpv\") pod \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.840569 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run-ovn\") pod \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.840664 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run\") pod \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\" (UID: \"c29b4bbb-7011-42a5-8f9a-e31f2efda51e\") " Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.841087 5028 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.841134 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c29b4bbb-7011-42a5-8f9a-e31f2efda51e" (UID: "c29b4bbb-7011-42a5-8f9a-e31f2efda51e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.841157 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run" (OuterVolumeSpecName: "var-run") pod "c29b4bbb-7011-42a5-8f9a-e31f2efda51e" (UID: "c29b4bbb-7011-42a5-8f9a-e31f2efda51e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.841278 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c29b4bbb-7011-42a5-8f9a-e31f2efda51e" (UID: "c29b4bbb-7011-42a5-8f9a-e31f2efda51e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.842292 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-scripts" (OuterVolumeSpecName: "scripts") pod "c29b4bbb-7011-42a5-8f9a-e31f2efda51e" (UID: "c29b4bbb-7011-42a5-8f9a-e31f2efda51e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.845306 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-kube-api-access-bjtpv" (OuterVolumeSpecName: "kube-api-access-bjtpv") pod "c29b4bbb-7011-42a5-8f9a-e31f2efda51e" (UID: "c29b4bbb-7011-42a5-8f9a-e31f2efda51e"). InnerVolumeSpecName "kube-api-access-bjtpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.942095 5028 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.942135 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.942147 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjtpv\" (UniqueName: \"kubernetes.io/projected/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-kube-api-access-bjtpv\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.942160 5028 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:18 crc kubenswrapper[5028]: I1123 07:09:18.942176 5028 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c29b4bbb-7011-42a5-8f9a-e31f2efda51e-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.418466 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xfsr-config-wtknh" event={"ID":"c29b4bbb-7011-42a5-8f9a-e31f2efda51e","Type":"ContainerDied","Data":"55e633c764e509f72d76f2d69fcf513642edc87a43742fb879b5b83a418f7f9c"} Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.418509 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55e633c764e509f72d76f2d69fcf513642edc87a43742fb879b5b83a418f7f9c" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.418604 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7xfsr-config-wtknh" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.616421 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-swbsc"] Nov 23 07:09:19 crc kubenswrapper[5028]: E1123 07:09:19.616722 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfda6b9-44dc-4c93-9013-bf315a3bf92d" containerName="swift-ring-rebalance" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.616740 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfda6b9-44dc-4c93-9013-bf315a3bf92d" containerName="swift-ring-rebalance" Nov 23 07:09:19 crc kubenswrapper[5028]: E1123 07:09:19.616753 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29b4bbb-7011-42a5-8f9a-e31f2efda51e" containerName="ovn-config" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.616759 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29b4bbb-7011-42a5-8f9a-e31f2efda51e" containerName="ovn-config" Nov 23 07:09:19 crc kubenswrapper[5028]: E1123 07:09:19.616792 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9255961-4bb2-4ffc-af3d-9fd8998c59d6" containerName="mariadb-database-create" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.616798 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9255961-4bb2-4ffc-af3d-9fd8998c59d6" containerName="mariadb-database-create" Nov 23 07:09:19 crc kubenswrapper[5028]: E1123 07:09:19.616811 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0bfc63d-fbf6-48be-b850-a3370894112b" containerName="mariadb-account-create" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.616816 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0bfc63d-fbf6-48be-b850-a3370894112b" containerName="mariadb-account-create" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.616973 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfda6b9-44dc-4c93-9013-bf315a3bf92d" containerName="swift-ring-rebalance" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.616994 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29b4bbb-7011-42a5-8f9a-e31f2efda51e" containerName="ovn-config" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.617011 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0bfc63d-fbf6-48be-b850-a3370894112b" containerName="mariadb-account-create" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.617025 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9255961-4bb2-4ffc-af3d-9fd8998c59d6" containerName="mariadb-database-create" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.617504 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.619847 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fk5p7" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.620623 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.629309 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-swbsc"] Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.755235 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-config-data\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.755316 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bklsl\" (UniqueName: \"kubernetes.io/projected/b1c7363e-bafb-4e60-87bb-bb66f77d5943-kube-api-access-bklsl\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.755339 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-db-sync-config-data\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.755394 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-combined-ca-bundle\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.829758 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7xfsr-config-wtknh"] Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.837653 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7xfsr-config-wtknh"] Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.856890 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-combined-ca-bundle\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.857003 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-config-data\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.857041 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bklsl\" (UniqueName: \"kubernetes.io/projected/b1c7363e-bafb-4e60-87bb-bb66f77d5943-kube-api-access-bklsl\") pod \"glance-db-sync-swbsc\" (UID: 
\"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.857062 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-db-sync-config-data\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.860911 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-db-sync-config-data\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.861003 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-combined-ca-bundle\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.862412 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-config-data\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.874481 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bklsl\" (UniqueName: \"kubernetes.io/projected/b1c7363e-bafb-4e60-87bb-bb66f77d5943-kube-api-access-bklsl\") pod \"glance-db-sync-swbsc\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:19 crc kubenswrapper[5028]: I1123 07:09:19.942092 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:20 crc kubenswrapper[5028]: I1123 07:09:20.350474 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7xfsr" Nov 23 07:09:20 crc kubenswrapper[5028]: I1123 07:09:20.417819 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-swbsc"] Nov 23 07:09:20 crc kubenswrapper[5028]: W1123 07:09:20.430282 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1c7363e_bafb_4e60_87bb_bb66f77d5943.slice/crio-168dedc2fd8436c71202e56af8d2138d0a5c54e4baf551889cf77d3917ff36ec WatchSource:0}: Error finding container 168dedc2fd8436c71202e56af8d2138d0a5c54e4baf551889cf77d3917ff36ec: Status 404 returned error can't find the container with id 168dedc2fd8436c71202e56af8d2138d0a5c54e4baf551889cf77d3917ff36ec Nov 23 07:09:21 crc kubenswrapper[5028]: I1123 07:09:21.063092 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29b4bbb-7011-42a5-8f9a-e31f2efda51e" path="/var/lib/kubelet/pods/c29b4bbb-7011-42a5-8f9a-e31f2efda51e/volumes" Nov 23 07:09:21 crc kubenswrapper[5028]: I1123 07:09:21.439338 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-swbsc" event={"ID":"b1c7363e-bafb-4e60-87bb-bb66f77d5943","Type":"ContainerStarted","Data":"168dedc2fd8436c71202e56af8d2138d0a5c54e4baf551889cf77d3917ff36ec"} Nov 23 07:09:24 crc kubenswrapper[5028]: I1123 07:09:24.464267 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1","Type":"ContainerStarted","Data":"133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c"} Nov 23 07:09:24 crc kubenswrapper[5028]: I1123 07:09:24.465528 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 23 07:09:24 crc kubenswrapper[5028]: I1123 07:09:24.496125 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.093427887 podStartE2EDuration="37.496107537s" podCreationTimestamp="2025-11-23 07:08:47 +0000 UTC" firstStartedPulling="2025-11-23 07:08:48.602154423 +0000 UTC m=+1112.299559212" lastFinishedPulling="2025-11-23 07:09:24.004834083 +0000 UTC m=+1147.702238862" observedRunningTime="2025-11-23 07:09:24.485186492 +0000 UTC m=+1148.182591271" watchObservedRunningTime="2025-11-23 07:09:24.496107537 +0000 UTC m=+1148.193512316" Nov 23 07:09:24 crc kubenswrapper[5028]: I1123 07:09:24.958063 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:09:24 crc kubenswrapper[5028]: I1123 07:09:24.979708 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"swift-storage-0\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " pod="openstack/swift-storage-0" Nov 23 07:09:25 crc kubenswrapper[5028]: I1123 07:09:25.124249 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.255137 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.631165 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-lp7d2"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.632302 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.639693 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0e8f-account-create-bv7pl"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.640693 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.644183 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.648763 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0e8f-account-create-bv7pl"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.656687 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lp7d2"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.697320 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmfs\" (UniqueName: \"kubernetes.io/projected/2fe660d5-bccb-427c-8e24-ee10b19d38cb-kube-api-access-dlmfs\") pod \"cinder-db-create-lp7d2\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.697431 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fe660d5-bccb-427c-8e24-ee10b19d38cb-operator-scripts\") pod \"cinder-db-create-lp7d2\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.736325 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-c952v"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.737547 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c952v" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.753440 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c952v"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.776751 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5527-account-create-swksf"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.778579 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.786103 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.798605 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fe660d5-bccb-427c-8e24-ee10b19d38cb-operator-scripts\") pod \"cinder-db-create-lp7d2\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.798684 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551aa7b7-8791-467e-9d61-0061389e8095-operator-scripts\") pod \"cinder-0e8f-account-create-bv7pl\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.798704 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmfs\" (UniqueName: \"kubernetes.io/projected/2fe660d5-bccb-427c-8e24-ee10b19d38cb-kube-api-access-dlmfs\") pod \"cinder-db-create-lp7d2\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.798725 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhd7\" (UniqueName: \"kubernetes.io/projected/551aa7b7-8791-467e-9d61-0061389e8095-kube-api-access-jrhd7\") pod \"cinder-0e8f-account-create-bv7pl\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.799428 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fe660d5-bccb-427c-8e24-ee10b19d38cb-operator-scripts\") pod \"cinder-db-create-lp7d2\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.810517 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5527-account-create-swksf"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.833298 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.852730 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmfs\" (UniqueName: \"kubernetes.io/projected/2fe660d5-bccb-427c-8e24-ee10b19d38cb-kube-api-access-dlmfs\") pod \"cinder-db-create-lp7d2\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.908907 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551aa7b7-8791-467e-9d61-0061389e8095-operator-scripts\") pod \"cinder-0e8f-account-create-bv7pl\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.909250 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhd7\" (UniqueName: 
\"kubernetes.io/projected/551aa7b7-8791-467e-9d61-0061389e8095-kube-api-access-jrhd7\") pod \"cinder-0e8f-account-create-bv7pl\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.909310 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-operator-scripts\") pod \"barbican-5527-account-create-swksf\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.909379 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/740e0b0c-f37c-4acf-8b98-847b26213c28-operator-scripts\") pod \"barbican-db-create-c952v\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " pod="openstack/barbican-db-create-c952v" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.909395 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s8g2\" (UniqueName: \"kubernetes.io/projected/740e0b0c-f37c-4acf-8b98-847b26213c28-kube-api-access-5s8g2\") pod \"barbican-db-create-c952v\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " pod="openstack/barbican-db-create-c952v" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.909422 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pksz7\" (UniqueName: \"kubernetes.io/projected/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-kube-api-access-pksz7\") pod \"barbican-5527-account-create-swksf\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.910642 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551aa7b7-8791-467e-9d61-0061389e8095-operator-scripts\") pod \"cinder-0e8f-account-create-bv7pl\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.965083 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.979823 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhd7\" (UniqueName: \"kubernetes.io/projected/551aa7b7-8791-467e-9d61-0061389e8095-kube-api-access-jrhd7\") pod \"cinder-0e8f-account-create-bv7pl\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.981311 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8471-account-create-4q7s2"] Nov 23 07:09:26 crc kubenswrapper[5028]: I1123 07:09:26.997935 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.000487 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.004012 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8nbsx"] Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.005524 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.012512 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-operator-scripts\") pod \"barbican-5527-account-create-swksf\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.014009 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/740e0b0c-f37c-4acf-8b98-847b26213c28-operator-scripts\") pod \"barbican-db-create-c952v\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " pod="openstack/barbican-db-create-c952v" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.014138 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s8g2\" (UniqueName: \"kubernetes.io/projected/740e0b0c-f37c-4acf-8b98-847b26213c28-kube-api-access-5s8g2\") pod \"barbican-db-create-c952v\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " pod="openstack/barbican-db-create-c952v" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.014237 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pksz7\" (UniqueName: \"kubernetes.io/projected/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-kube-api-access-pksz7\") pod \"barbican-5527-account-create-swksf\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.013911 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-operator-scripts\") pod \"barbican-5527-account-create-swksf\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.015273 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/740e0b0c-f37c-4acf-8b98-847b26213c28-operator-scripts\") pod \"barbican-db-create-c952v\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " pod="openstack/barbican-db-create-c952v" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.033017 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8471-account-create-4q7s2"] Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.071559 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pksz7\" (UniqueName: \"kubernetes.io/projected/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-kube-api-access-pksz7\") pod \"barbican-5527-account-create-swksf\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:27 crc 
kubenswrapper[5028]: I1123 07:09:27.092116 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8nbsx"] Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.109721 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s8g2\" (UniqueName: \"kubernetes.io/projected/740e0b0c-f37c-4acf-8b98-847b26213c28-kube-api-access-5s8g2\") pod \"barbican-db-create-c952v\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " pod="openstack/barbican-db-create-c952v" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.110612 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xw8q6"] Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.112899 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.110906 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.116365 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xw8q6"] Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.118004 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.121462 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60cf19a2-c7e4-40db-a8f9-6d562989323a-operator-scripts\") pod \"neutron-db-create-8nbsx\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.128424 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxc9\" (UniqueName: \"kubernetes.io/projected/be0d6135-0757-4a02-9c31-ccde549d04e6-kube-api-access-fbxc9\") pod \"neutron-8471-account-create-4q7s2\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.128542 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ffl\" (UniqueName: \"kubernetes.io/projected/60cf19a2-c7e4-40db-a8f9-6d562989323a-kube-api-access-n6ffl\") pod \"neutron-db-create-8nbsx\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.128628 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0d6135-0757-4a02-9c31-ccde549d04e6-operator-scripts\") pod \"neutron-8471-account-create-4q7s2\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.122174 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-22nmw" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.122206 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.122282 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:09:27 crc 
kubenswrapper[5028]: I1123 07:09:27.236754 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60cf19a2-c7e4-40db-a8f9-6d562989323a-operator-scripts\") pod \"neutron-db-create-8nbsx\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.237043 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxc9\" (UniqueName: \"kubernetes.io/projected/be0d6135-0757-4a02-9c31-ccde549d04e6-kube-api-access-fbxc9\") pod \"neutron-8471-account-create-4q7s2\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.237442 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60cf19a2-c7e4-40db-a8f9-6d562989323a-operator-scripts\") pod \"neutron-db-create-8nbsx\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.237552 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ffl\" (UniqueName: \"kubernetes.io/projected/60cf19a2-c7e4-40db-a8f9-6d562989323a-kube-api-access-n6ffl\") pod \"neutron-db-create-8nbsx\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.237637 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0d6135-0757-4a02-9c31-ccde549d04e6-operator-scripts\") pod \"neutron-8471-account-create-4q7s2\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.238516 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-combined-ca-bundle\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.239387 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw56v\" (UniqueName: \"kubernetes.io/projected/e6621763-252c-443e-9049-5d13e231e916-kube-api-access-nw56v\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.239482 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-config-data\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.238400 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0d6135-0757-4a02-9c31-ccde549d04e6-operator-scripts\") pod \"neutron-8471-account-create-4q7s2\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 
07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.258094 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxc9\" (UniqueName: \"kubernetes.io/projected/be0d6135-0757-4a02-9c31-ccde549d04e6-kube-api-access-fbxc9\") pod \"neutron-8471-account-create-4q7s2\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.258129 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ffl\" (UniqueName: \"kubernetes.io/projected/60cf19a2-c7e4-40db-a8f9-6d562989323a-kube-api-access-n6ffl\") pod \"neutron-db-create-8nbsx\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.263912 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.341020 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-combined-ca-bundle\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.341077 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw56v\" (UniqueName: \"kubernetes.io/projected/e6621763-252c-443e-9049-5d13e231e916-kube-api-access-nw56v\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.341104 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-config-data\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.344879 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-config-data\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.348594 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-combined-ca-bundle\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.363539 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw56v\" (UniqueName: \"kubernetes.io/projected/e6621763-252c-443e-9049-5d13e231e916-kube-api-access-nw56v\") pod \"keystone-db-sync-xw8q6\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.366451 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c952v" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.438449 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.489617 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:27 crc kubenswrapper[5028]: I1123 07:09:27.498855 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:32 crc kubenswrapper[5028]: I1123 07:09:32.612595 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xw8q6"] Nov 23 07:09:32 crc kubenswrapper[5028]: I1123 07:09:32.852825 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-c952v"] Nov 23 07:09:32 crc kubenswrapper[5028]: W1123 07:09:32.855805 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe0d6135_0757_4a02_9c31_ccde549d04e6.slice/crio-a697d84c60f0ee2160dbcf895ad04026e9faabf9122e11ed8d01759e6a1f52bb WatchSource:0}: Error finding container a697d84c60f0ee2160dbcf895ad04026e9faabf9122e11ed8d01759e6a1f52bb: Status 404 returned error can't find the container with id a697d84c60f0ee2160dbcf895ad04026e9faabf9122e11ed8d01759e6a1f52bb Nov 23 07:09:32 crc kubenswrapper[5028]: W1123 07:09:32.867393 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60cf19a2_c7e4_40db_a8f9_6d562989323a.slice/crio-7dad51351e7c43d5820b27b5421671263e810ee365034bac8ba0d6e60c88c91b WatchSource:0}: Error finding container 7dad51351e7c43d5820b27b5421671263e810ee365034bac8ba0d6e60c88c91b: Status 404 returned error can't find the container with id 7dad51351e7c43d5820b27b5421671263e810ee365034bac8ba0d6e60c88c91b Nov 23 07:09:32 crc kubenswrapper[5028]: I1123 07:09:32.868491 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8471-account-create-4q7s2"] Nov 23 07:09:32 crc kubenswrapper[5028]: I1123 07:09:32.875974 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5527-account-create-swksf"] Nov 23 07:09:32 crc kubenswrapper[5028]: I1123 07:09:32.882523 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0e8f-account-create-bv7pl"] Nov 23 07:09:32 crc kubenswrapper[5028]: I1123 07:09:32.888868 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lp7d2"] Nov 23 07:09:32 crc kubenswrapper[5028]: I1123 07:09:32.894462 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8nbsx"] Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.109169 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.564956 5028 generic.go:334] "Generic (PLEG): container finished" podID="be0d6135-0757-4a02-9c31-ccde549d04e6" containerID="88644cdbb4374bf4c073a29fd16593f8d69ed669e05515272f3ea6f3db4edd4c" exitCode=0 Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.565055 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8471-account-create-4q7s2" event={"ID":"be0d6135-0757-4a02-9c31-ccde549d04e6","Type":"ContainerDied","Data":"88644cdbb4374bf4c073a29fd16593f8d69ed669e05515272f3ea6f3db4edd4c"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.565105 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8471-account-create-4q7s2" 
event={"ID":"be0d6135-0757-4a02-9c31-ccde549d04e6","Type":"ContainerStarted","Data":"a697d84c60f0ee2160dbcf895ad04026e9faabf9122e11ed8d01759e6a1f52bb"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.566488 5028 generic.go:334] "Generic (PLEG): container finished" podID="9d2c10f1-db6c-432e-a8d5-f695179ecd2f" containerID="8cbb2a112cb05c3a4ff0551cbc9cb7a30fe09d0a7e86376732f2ee0979b8a58a" exitCode=0 Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.566558 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5527-account-create-swksf" event={"ID":"9d2c10f1-db6c-432e-a8d5-f695179ecd2f","Type":"ContainerDied","Data":"8cbb2a112cb05c3a4ff0551cbc9cb7a30fe09d0a7e86376732f2ee0979b8a58a"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.566578 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5527-account-create-swksf" event={"ID":"9d2c10f1-db6c-432e-a8d5-f695179ecd2f","Type":"ContainerStarted","Data":"3553f8bbcb4a63e3fbed5129036ffddff73f8e8f89e8db1e1027712deafbec3c"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.568601 5028 generic.go:334] "Generic (PLEG): container finished" podID="2fe660d5-bccb-427c-8e24-ee10b19d38cb" containerID="3bb8b33fe6e5b4407f5b6a3e5f6c662c7dc73f42b9338e7c86cf692773a77030" exitCode=0 Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.568657 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lp7d2" event={"ID":"2fe660d5-bccb-427c-8e24-ee10b19d38cb","Type":"ContainerDied","Data":"3bb8b33fe6e5b4407f5b6a3e5f6c662c7dc73f42b9338e7c86cf692773a77030"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.568675 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lp7d2" event={"ID":"2fe660d5-bccb-427c-8e24-ee10b19d38cb","Type":"ContainerStarted","Data":"6075cdd15fb3b86e5a103ef183bc9e1c0ca5a4bd851fdae58793efa50ec7cd74"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.573866 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xw8q6" event={"ID":"e6621763-252c-443e-9049-5d13e231e916","Type":"ContainerStarted","Data":"f155b43bd9143c896bab3c0ff5e73cb5acea837944663a54c8666ea32d70828f"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.574703 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"1b57be1d0736719a2582d7aa1ff102592a68ff554c23c6adad468c94b01affc7"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.575704 5028 generic.go:334] "Generic (PLEG): container finished" podID="740e0b0c-f37c-4acf-8b98-847b26213c28" containerID="630b0afa1537ce69941a71ded3e76c91b4b8f7e406c4cce4564688b1413fee1a" exitCode=0 Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.575746 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c952v" event={"ID":"740e0b0c-f37c-4acf-8b98-847b26213c28","Type":"ContainerDied","Data":"630b0afa1537ce69941a71ded3e76c91b4b8f7e406c4cce4564688b1413fee1a"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.575761 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c952v" event={"ID":"740e0b0c-f37c-4acf-8b98-847b26213c28","Type":"ContainerStarted","Data":"ba964f9f3b4fadc3809b61a3eeb8be605f6040f9387614bcdd3c2edfd59380c8"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.576861 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-swbsc" event={"ID":"b1c7363e-bafb-4e60-87bb-bb66f77d5943","Type":"ContainerStarted","Data":"da3485b8f367c453a3b55731c8c85e776b41135de878e8be472158c1fc4d40f3"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.578567 5028 generic.go:334] "Generic (PLEG): container finished" podID="60cf19a2-c7e4-40db-a8f9-6d562989323a" containerID="ae5b287a41e57ddc9ba6125b4534965ed7c6b5a24e3cb68dd46326b1643008bc" exitCode=0 Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.578607 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8nbsx" event={"ID":"60cf19a2-c7e4-40db-a8f9-6d562989323a","Type":"ContainerDied","Data":"ae5b287a41e57ddc9ba6125b4534965ed7c6b5a24e3cb68dd46326b1643008bc"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.578624 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8nbsx" event={"ID":"60cf19a2-c7e4-40db-a8f9-6d562989323a","Type":"ContainerStarted","Data":"7dad51351e7c43d5820b27b5421671263e810ee365034bac8ba0d6e60c88c91b"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.581493 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e8f-account-create-bv7pl" event={"ID":"551aa7b7-8791-467e-9d61-0061389e8095","Type":"ContainerDied","Data":"1f51e7c906609c0ffde79dcfa255aa9e3cf5469f48ac52a4d77c96aaa45cadd1"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.582572 5028 generic.go:334] "Generic (PLEG): container finished" podID="551aa7b7-8791-467e-9d61-0061389e8095" containerID="1f51e7c906609c0ffde79dcfa255aa9e3cf5469f48ac52a4d77c96aaa45cadd1" exitCode=0 Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.582644 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e8f-account-create-bv7pl" event={"ID":"551aa7b7-8791-467e-9d61-0061389e8095","Type":"ContainerStarted","Data":"742726121c13b6ffde2c891f0b6e33b3bd1143b57049745bc7178baa07860cf0"} Nov 23 07:09:33 crc kubenswrapper[5028]: I1123 07:09:33.648375 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-swbsc" podStartSLOduration=2.7818046819999998 podStartE2EDuration="14.648360189s" podCreationTimestamp="2025-11-23 07:09:19 +0000 UTC" firstStartedPulling="2025-11-23 07:09:20.432593709 +0000 UTC m=+1144.129998488" lastFinishedPulling="2025-11-23 07:09:32.299149216 +0000 UTC m=+1155.996553995" observedRunningTime="2025-11-23 07:09:33.644059875 +0000 UTC m=+1157.341464654" watchObservedRunningTime="2025-11-23 07:09:33.648360189 +0000 UTC m=+1157.345764968" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.607512 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-c952v" event={"ID":"740e0b0c-f37c-4acf-8b98-847b26213c28","Type":"ContainerDied","Data":"ba964f9f3b4fadc3809b61a3eeb8be605f6040f9387614bcdd3c2edfd59380c8"} Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.608256 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba964f9f3b4fadc3809b61a3eeb8be605f6040f9387614bcdd3c2edfd59380c8" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.613340 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8471-account-create-4q7s2" event={"ID":"be0d6135-0757-4a02-9c31-ccde549d04e6","Type":"ContainerDied","Data":"a697d84c60f0ee2160dbcf895ad04026e9faabf9122e11ed8d01759e6a1f52bb"} Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.613383 5028 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="a697d84c60f0ee2160dbcf895ad04026e9faabf9122e11ed8d01759e6a1f52bb" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.617666 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5527-account-create-swksf" event={"ID":"9d2c10f1-db6c-432e-a8d5-f695179ecd2f","Type":"ContainerDied","Data":"3553f8bbcb4a63e3fbed5129036ffddff73f8e8f89e8db1e1027712deafbec3c"} Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.617697 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3553f8bbcb4a63e3fbed5129036ffddff73f8e8f89e8db1e1027712deafbec3c" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.619691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lp7d2" event={"ID":"2fe660d5-bccb-427c-8e24-ee10b19d38cb","Type":"ContainerDied","Data":"6075cdd15fb3b86e5a103ef183bc9e1c0ca5a4bd851fdae58793efa50ec7cd74"} Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.619726 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6075cdd15fb3b86e5a103ef183bc9e1c0ca5a4bd851fdae58793efa50ec7cd74" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.620747 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8nbsx" event={"ID":"60cf19a2-c7e4-40db-a8f9-6d562989323a","Type":"ContainerDied","Data":"7dad51351e7c43d5820b27b5421671263e810ee365034bac8ba0d6e60c88c91b"} Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.620775 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dad51351e7c43d5820b27b5421671263e810ee365034bac8ba0d6e60c88c91b" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.622660 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0e8f-account-create-bv7pl" event={"ID":"551aa7b7-8791-467e-9d61-0061389e8095","Type":"ContainerDied","Data":"742726121c13b6ffde2c891f0b6e33b3bd1143b57049745bc7178baa07860cf0"} Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.622692 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742726121c13b6ffde2c891f0b6e33b3bd1143b57049745bc7178baa07860cf0" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.810524 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.857352 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.891298 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.904646 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.914223 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pksz7\" (UniqueName: \"kubernetes.io/projected/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-kube-api-access-pksz7\") pod \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.914279 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-operator-scripts\") pod \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\" (UID: \"9d2c10f1-db6c-432e-a8d5-f695179ecd2f\") " Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.915416 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d2c10f1-db6c-432e-a8d5-f695179ecd2f" (UID: "9d2c10f1-db6c-432e-a8d5-f695179ecd2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.927177 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-kube-api-access-pksz7" (OuterVolumeSpecName: "kube-api-access-pksz7") pod "9d2c10f1-db6c-432e-a8d5-f695179ecd2f" (UID: "9d2c10f1-db6c-432e-a8d5-f695179ecd2f"). InnerVolumeSpecName "kube-api-access-pksz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.950726 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:36 crc kubenswrapper[5028]: I1123 07:09:36.988526 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-c952v" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.015880 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fe660d5-bccb-427c-8e24-ee10b19d38cb-operator-scripts\") pod \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.015963 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551aa7b7-8791-467e-9d61-0061389e8095-operator-scripts\") pod \"551aa7b7-8791-467e-9d61-0061389e8095\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016046 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ffl\" (UniqueName: \"kubernetes.io/projected/60cf19a2-c7e4-40db-a8f9-6d562989323a-kube-api-access-n6ffl\") pod \"60cf19a2-c7e4-40db-a8f9-6d562989323a\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016076 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0d6135-0757-4a02-9c31-ccde549d04e6-operator-scripts\") pod \"be0d6135-0757-4a02-9c31-ccde549d04e6\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016112 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60cf19a2-c7e4-40db-a8f9-6d562989323a-operator-scripts\") pod \"60cf19a2-c7e4-40db-a8f9-6d562989323a\" (UID: \"60cf19a2-c7e4-40db-a8f9-6d562989323a\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016229 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmfs\" (UniqueName: \"kubernetes.io/projected/2fe660d5-bccb-427c-8e24-ee10b19d38cb-kube-api-access-dlmfs\") pod \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\" (UID: \"2fe660d5-bccb-427c-8e24-ee10b19d38cb\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016294 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbxc9\" (UniqueName: \"kubernetes.io/projected/be0d6135-0757-4a02-9c31-ccde549d04e6-kube-api-access-fbxc9\") pod \"be0d6135-0757-4a02-9c31-ccde549d04e6\" (UID: \"be0d6135-0757-4a02-9c31-ccde549d04e6\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016358 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhd7\" (UniqueName: \"kubernetes.io/projected/551aa7b7-8791-467e-9d61-0061389e8095-kube-api-access-jrhd7\") pod \"551aa7b7-8791-467e-9d61-0061389e8095\" (UID: \"551aa7b7-8791-467e-9d61-0061389e8095\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016698 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pksz7\" (UniqueName: \"kubernetes.io/projected/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-kube-api-access-pksz7\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.016715 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d2c10f1-db6c-432e-a8d5-f695179ecd2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc 
kubenswrapper[5028]: I1123 07:09:37.017461 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0d6135-0757-4a02-9c31-ccde549d04e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be0d6135-0757-4a02-9c31-ccde549d04e6" (UID: "be0d6135-0757-4a02-9c31-ccde549d04e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.017471 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/551aa7b7-8791-467e-9d61-0061389e8095-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "551aa7b7-8791-467e-9d61-0061389e8095" (UID: "551aa7b7-8791-467e-9d61-0061389e8095"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.017901 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60cf19a2-c7e4-40db-a8f9-6d562989323a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60cf19a2-c7e4-40db-a8f9-6d562989323a" (UID: "60cf19a2-c7e4-40db-a8f9-6d562989323a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.017986 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fe660d5-bccb-427c-8e24-ee10b19d38cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fe660d5-bccb-427c-8e24-ee10b19d38cb" (UID: "2fe660d5-bccb-427c-8e24-ee10b19d38cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.020122 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551aa7b7-8791-467e-9d61-0061389e8095-kube-api-access-jrhd7" (OuterVolumeSpecName: "kube-api-access-jrhd7") pod "551aa7b7-8791-467e-9d61-0061389e8095" (UID: "551aa7b7-8791-467e-9d61-0061389e8095"). InnerVolumeSpecName "kube-api-access-jrhd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.020898 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60cf19a2-c7e4-40db-a8f9-6d562989323a-kube-api-access-n6ffl" (OuterVolumeSpecName: "kube-api-access-n6ffl") pod "60cf19a2-c7e4-40db-a8f9-6d562989323a" (UID: "60cf19a2-c7e4-40db-a8f9-6d562989323a"). InnerVolumeSpecName "kube-api-access-n6ffl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.021466 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fe660d5-bccb-427c-8e24-ee10b19d38cb-kube-api-access-dlmfs" (OuterVolumeSpecName: "kube-api-access-dlmfs") pod "2fe660d5-bccb-427c-8e24-ee10b19d38cb" (UID: "2fe660d5-bccb-427c-8e24-ee10b19d38cb"). InnerVolumeSpecName "kube-api-access-dlmfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.021511 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0d6135-0757-4a02-9c31-ccde549d04e6-kube-api-access-fbxc9" (OuterVolumeSpecName: "kube-api-access-fbxc9") pod "be0d6135-0757-4a02-9c31-ccde549d04e6" (UID: "be0d6135-0757-4a02-9c31-ccde549d04e6"). 
InnerVolumeSpecName "kube-api-access-fbxc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.117339 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s8g2\" (UniqueName: \"kubernetes.io/projected/740e0b0c-f37c-4acf-8b98-847b26213c28-kube-api-access-5s8g2\") pod \"740e0b0c-f37c-4acf-8b98-847b26213c28\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.117549 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/740e0b0c-f37c-4acf-8b98-847b26213c28-operator-scripts\") pod \"740e0b0c-f37c-4acf-8b98-847b26213c28\" (UID: \"740e0b0c-f37c-4acf-8b98-847b26213c28\") " Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.117971 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60cf19a2-c7e4-40db-a8f9-6d562989323a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.117997 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmfs\" (UniqueName: \"kubernetes.io/projected/2fe660d5-bccb-427c-8e24-ee10b19d38cb-kube-api-access-dlmfs\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.118013 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbxc9\" (UniqueName: \"kubernetes.io/projected/be0d6135-0757-4a02-9c31-ccde549d04e6-kube-api-access-fbxc9\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.118030 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhd7\" (UniqueName: \"kubernetes.io/projected/551aa7b7-8791-467e-9d61-0061389e8095-kube-api-access-jrhd7\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.118042 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fe660d5-bccb-427c-8e24-ee10b19d38cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.118054 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/551aa7b7-8791-467e-9d61-0061389e8095-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.118067 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ffl\" (UniqueName: \"kubernetes.io/projected/60cf19a2-c7e4-40db-a8f9-6d562989323a-kube-api-access-n6ffl\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.118078 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0d6135-0757-4a02-9c31-ccde549d04e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.118042 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/740e0b0c-f37c-4acf-8b98-847b26213c28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "740e0b0c-f37c-4acf-8b98-847b26213c28" (UID: "740e0b0c-f37c-4acf-8b98-847b26213c28"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.120788 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740e0b0c-f37c-4acf-8b98-847b26213c28-kube-api-access-5s8g2" (OuterVolumeSpecName: "kube-api-access-5s8g2") pod "740e0b0c-f37c-4acf-8b98-847b26213c28" (UID: "740e0b0c-f37c-4acf-8b98-847b26213c28"). InnerVolumeSpecName "kube-api-access-5s8g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.220603 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/740e0b0c-f37c-4acf-8b98-847b26213c28-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.220630 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s8g2\" (UniqueName: \"kubernetes.io/projected/740e0b0c-f37c-4acf-8b98-847b26213c28-kube-api-access-5s8g2\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.631686 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xw8q6" event={"ID":"e6621763-252c-443e-9049-5d13e231e916","Type":"ContainerStarted","Data":"27bf9b397e896cd8bc8fd40b3f85b6d290c89b1f88de3910d80025bbd49ee0f4"} Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.642424 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0e8f-account-create-bv7pl" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.643238 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lp7d2" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.643283 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"2cff70916c4b86894d8212d9f22c1034cf16138344b62b7e140d3c041d91cee7"} Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.643328 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"284fcc70f4d39f784940ff357d718a373de8c2e8881a64f54fa7a0acceaadf32"} Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.643372 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8nbsx" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.643235 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8471-account-create-4q7s2" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.644227 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-c952v" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.644230 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5527-account-create-swksf" Nov 23 07:09:37 crc kubenswrapper[5028]: I1123 07:09:37.670997 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xw8q6" podStartSLOduration=6.747419688 podStartE2EDuration="10.670975413s" podCreationTimestamp="2025-11-23 07:09:27 +0000 UTC" firstStartedPulling="2025-11-23 07:09:32.629486378 +0000 UTC m=+1156.326891167" lastFinishedPulling="2025-11-23 07:09:36.553042103 +0000 UTC m=+1160.250446892" observedRunningTime="2025-11-23 07:09:37.65770187 +0000 UTC m=+1161.355106649" watchObservedRunningTime="2025-11-23 07:09:37.670975413 +0000 UTC m=+1161.368380202" Nov 23 07:09:38 crc kubenswrapper[5028]: I1123 07:09:38.193620 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 23 07:09:38 crc kubenswrapper[5028]: I1123 07:09:38.653104 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"51780654d47b4748040c6f8ea75ab63207224c0aa1e348e73e20d5e202474d89"} Nov 23 07:09:38 crc kubenswrapper[5028]: I1123 07:09:38.653144 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"75bf1a6baebd3a81179a9db585726101ffd93df8b27cb4e5da1fb372c8b6ce89"} Nov 23 07:09:46 crc kubenswrapper[5028]: I1123 07:09:46.723722 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"f7f3db02234e290a3cc4660fddd3b6ddc3c347130630535c71abc6cc72896ac8"} Nov 23 07:09:46 crc kubenswrapper[5028]: I1123 07:09:46.724295 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"34d6b82184b9d4d53e4cb202e9b47148b5aa74237f7bd04d23d2f1b5f8f45fee"} Nov 23 07:09:46 crc kubenswrapper[5028]: I1123 07:09:46.724307 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"93de291167c5b5543ba1d794eabbe63b56604ecdcef6943568578c5bb4a29229"} Nov 23 07:09:46 crc kubenswrapper[5028]: I1123 07:09:46.724315 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"f33d8257c3344d1c41e045d276b50855aed400ece4dc05b5dbad0b7e7e645ec1"} Nov 23 07:09:47 crc kubenswrapper[5028]: I1123 07:09:47.734733 5028 generic.go:334] "Generic (PLEG): container finished" podID="e6621763-252c-443e-9049-5d13e231e916" containerID="27bf9b397e896cd8bc8fd40b3f85b6d290c89b1f88de3910d80025bbd49ee0f4" exitCode=0 Nov 23 07:09:47 crc kubenswrapper[5028]: I1123 07:09:47.734911 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xw8q6" event={"ID":"e6621763-252c-443e-9049-5d13e231e916","Type":"ContainerDied","Data":"27bf9b397e896cd8bc8fd40b3f85b6d290c89b1f88de3910d80025bbd49ee0f4"} Nov 23 07:09:47 crc kubenswrapper[5028]: I1123 07:09:47.742047 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"9b2449d180857fa287537e9f2caa3d5b2c6ef6945336b39d4b2e2f1bba6f48e5"} Nov 23 07:09:48 
crc kubenswrapper[5028]: I1123 07:09:48.767881 5028 generic.go:334] "Generic (PLEG): container finished" podID="b1c7363e-bafb-4e60-87bb-bb66f77d5943" containerID="da3485b8f367c453a3b55731c8c85e776b41135de878e8be472158c1fc4d40f3" exitCode=0 Nov 23 07:09:48 crc kubenswrapper[5028]: I1123 07:09:48.767985 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-swbsc" event={"ID":"b1c7363e-bafb-4e60-87bb-bb66f77d5943","Type":"ContainerDied","Data":"da3485b8f367c453a3b55731c8c85e776b41135de878e8be472158c1fc4d40f3"} Nov 23 07:09:48 crc kubenswrapper[5028]: I1123 07:09:48.775502 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"2df62fa2c1e66a404dc3b2733961ec36cec00bf7f77ead77869b439ade6e92b1"} Nov 23 07:09:48 crc kubenswrapper[5028]: I1123 07:09:48.775592 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"81a687eb71e007a1ac13feb65bbc68b1c3f2bf021519d85e91cd16f4d603b2f9"} Nov 23 07:09:48 crc kubenswrapper[5028]: I1123 07:09:48.775610 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"3457d5419e6677ab415baff0a0c4f5bcfce5e9c08759e54ca60a97c9fe0f0b09"} Nov 23 07:09:48 crc kubenswrapper[5028]: I1123 07:09:48.775622 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"3758a8edb86f9cc2311dfbbc6420a20b7bb4456290271a1c277bd6e7daaf2d0b"} Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.073689 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.237377 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-combined-ca-bundle\") pod \"e6621763-252c-443e-9049-5d13e231e916\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.237707 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-config-data\") pod \"e6621763-252c-443e-9049-5d13e231e916\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.237887 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw56v\" (UniqueName: \"kubernetes.io/projected/e6621763-252c-443e-9049-5d13e231e916-kube-api-access-nw56v\") pod \"e6621763-252c-443e-9049-5d13e231e916\" (UID: \"e6621763-252c-443e-9049-5d13e231e916\") " Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.244714 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6621763-252c-443e-9049-5d13e231e916-kube-api-access-nw56v" (OuterVolumeSpecName: "kube-api-access-nw56v") pod "e6621763-252c-443e-9049-5d13e231e916" (UID: "e6621763-252c-443e-9049-5d13e231e916"). InnerVolumeSpecName "kube-api-access-nw56v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.261990 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6621763-252c-443e-9049-5d13e231e916" (UID: "e6621763-252c-443e-9049-5d13e231e916"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.284656 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-config-data" (OuterVolumeSpecName: "config-data") pod "e6621763-252c-443e-9049-5d13e231e916" (UID: "e6621763-252c-443e-9049-5d13e231e916"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.340183 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.340213 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6621763-252c-443e-9049-5d13e231e916-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.340222 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw56v\" (UniqueName: \"kubernetes.io/projected/e6621763-252c-443e-9049-5d13e231e916-kube-api-access-nw56v\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.789748 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xw8q6" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.789740 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xw8q6" event={"ID":"e6621763-252c-443e-9049-5d13e231e916","Type":"ContainerDied","Data":"f155b43bd9143c896bab3c0ff5e73cb5acea837944663a54c8666ea32d70828f"} Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.789913 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f155b43bd9143c896bab3c0ff5e73cb5acea837944663a54c8666ea32d70828f" Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.796566 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"1aafd0f3d20a9763f9c844dde7d914b68a8c9b6c1813f1ef8f09835c63225eb8"} Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.796616 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerStarted","Data":"40974909c6c254ecbda3968c5a7f724e9828b2c9c3d631d0c9e90be6edb3e66d"} Nov 23 07:09:49 crc kubenswrapper[5028]: I1123 07:09:49.843908 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.516435119 podStartE2EDuration="58.843890688s" podCreationTimestamp="2025-11-23 07:08:51 +0000 UTC" firstStartedPulling="2025-11-23 07:09:33.11613997 +0000 UTC m=+1156.813544749" lastFinishedPulling="2025-11-23 07:09:47.443595539 +0000 UTC m=+1171.141000318" observedRunningTime="2025-11-23 07:09:49.830308458 +0000 UTC m=+1173.527713257" watchObservedRunningTime="2025-11-23 07:09:49.843890688 +0000 UTC m=+1173.541295467" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018086 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8645678c8c-7d2q2"] Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.018676 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2c10f1-db6c-432e-a8d5-f695179ecd2f" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018691 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2c10f1-db6c-432e-a8d5-f695179ecd2f" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.018702 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6621763-252c-443e-9049-5d13e231e916" containerName="keystone-db-sync" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018708 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6621763-252c-443e-9049-5d13e231e916" containerName="keystone-db-sync" Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.018718 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740e0b0c-f37c-4acf-8b98-847b26213c28" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018724 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="740e0b0c-f37c-4acf-8b98-847b26213c28" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.018739 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551aa7b7-8791-467e-9d61-0061389e8095" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018745 5028 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="551aa7b7-8791-467e-9d61-0061389e8095" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.018754 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fe660d5-bccb-427c-8e24-ee10b19d38cb" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018759 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fe660d5-bccb-427c-8e24-ee10b19d38cb" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.018771 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0d6135-0757-4a02-9c31-ccde549d04e6" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018776 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0d6135-0757-4a02-9c31-ccde549d04e6" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.018789 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cf19a2-c7e4-40db-a8f9-6d562989323a" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018794 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cf19a2-c7e4-40db-a8f9-6d562989323a" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.018982 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fe660d5-bccb-427c-8e24-ee10b19d38cb" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.019002 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6621763-252c-443e-9049-5d13e231e916" containerName="keystone-db-sync" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.019012 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="740e0b0c-f37c-4acf-8b98-847b26213c28" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.019024 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0d6135-0757-4a02-9c31-ccde549d04e6" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.019041 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2c10f1-db6c-432e-a8d5-f695179ecd2f" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.019052 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="60cf19a2-c7e4-40db-a8f9-6d562989323a" containerName="mariadb-database-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.019062 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="551aa7b7-8791-467e-9d61-0061389e8095" containerName="mariadb-account-create" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.020396 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.026214 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nszb5"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.027335 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.031181 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.031220 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.031460 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.031607 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-22nmw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.031984 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.039832 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8645678c8c-7d2q2"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.058979 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nszb5"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155103 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-scripts\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155146 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-sb\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155164 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-config\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155207 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-dns-svc\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155233 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-combined-ca-bundle\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155255 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-nb\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " 
pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155302 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-fernet-keys\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155335 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-config-data\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155351 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccwwk\" (UniqueName: \"kubernetes.io/projected/52dbb1d3-74ce-40af-a86e-e8d80334d704-kube-api-access-ccwwk\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155402 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-credential-keys\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.155442 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgmn\" (UniqueName: \"kubernetes.io/projected/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-kube-api-access-5vgmn\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.211059 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-7hptw"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.212475 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.221468 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.221785 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.221910 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4bshb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.223264 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8645678c8c-7d2q2"] Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.224305 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-5vgmn ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" podUID="a6c1f77f-a062-4a6d-bd97-e9c9d114541f" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257630 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-scripts\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257689 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-sb\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257717 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-config\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257749 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-dns-svc\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257769 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-combined-ca-bundle\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257789 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-nb\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-fernet-keys\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257870 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-config-data\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257890 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccwwk\" (UniqueName: \"kubernetes.io/projected/52dbb1d3-74ce-40af-a86e-e8d80334d704-kube-api-access-ccwwk\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-credential-keys\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.257940 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vgmn\" (UniqueName: \"kubernetes.io/projected/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-kube-api-access-5vgmn\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.258965 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-sb\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.259535 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-config\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.260242 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-nb\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.262340 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-dns-svc\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.264505 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-scripts\") pod \"keystone-bootstrap-nszb5\" (UID: 
\"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.264570 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7hptw"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.269506 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-fernet-keys\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.278449 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-credential-keys\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.285593 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-combined-ca-bundle\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.295969 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccwwk\" (UniqueName: \"kubernetes.io/projected/52dbb1d3-74ce-40af-a86e-e8d80334d704-kube-api-access-ccwwk\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.297599 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-config-data\") pod \"keystone-bootstrap-nszb5\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") " pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.302483 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vgmn\" (UniqueName: \"kubernetes.io/projected/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-kube-api-access-5vgmn\") pod \"dnsmasq-dns-8645678c8c-7d2q2\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.302547 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.304472 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.308916 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.334295 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c7bdb785-zg2hc"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.335114 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.364128 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6hm\" (UniqueName: \"kubernetes.io/projected/8373ad19-cd11-4d27-8936-27132ab9bf72-kube-api-access-8w6hm\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.364264 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-combined-ca-bundle\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.364303 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-config\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.364890 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.384271 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.404170 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.467761 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6hm\" (UniqueName: \"kubernetes.io/projected/8373ad19-cd11-4d27-8936-27132ab9bf72-kube-api-access-8w6hm\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470122 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-swift-storage-0\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470411 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-nb\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470458 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-log-httpd\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470500 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470629 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-sb\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470683 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-run-httpd\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470834 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-combined-ca-bundle\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.470867 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-config\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.479613 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-config\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.479934 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrbf\" (UniqueName: \"kubernetes.io/projected/69d1e57b-89c3-49a4-95dc-537dcedf1c54-kube-api-access-nfrbf\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.480087 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.480220 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-svc\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.480272 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k2f8\" (UniqueName: \"kubernetes.io/projected/426016a6-a8a2-4817-ba4d-3d1662b15b78-kube-api-access-4k2f8\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.480436 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-scripts\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.480486 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-config-data\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.517470 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-config\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.527578 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-combined-ca-bundle\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " 
pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.528057 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.528761 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6hm\" (UniqueName: \"kubernetes.io/projected/8373ad19-cd11-4d27-8936-27132ab9bf72-kube-api-access-8w6hm\") pod \"neutron-db-sync-7hptw\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.533507 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7hptw" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.544205 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6nfnj"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.544827 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.570373 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fw9rb"] Nov 23 07:09:50 crc kubenswrapper[5028]: E1123 07:09:50.570760 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c7363e-bafb-4e60-87bb-bb66f77d5943" containerName="glance-db-sync" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.570781 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c7363e-bafb-4e60-87bb-bb66f77d5943" containerName="glance-db-sync" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.571066 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c7363e-bafb-4e60-87bb-bb66f77d5943" containerName="glance-db-sync" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.571598 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.571606 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.575673 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fnpjd" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.575926 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.576093 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.576257 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2jnl7" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.576359 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.581686 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6nfnj"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.581898 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrbf\" (UniqueName: \"kubernetes.io/projected/69d1e57b-89c3-49a4-95dc-537dcedf1c54-kube-api-access-nfrbf\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.581978 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582025 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-svc\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582048 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k2f8\" (UniqueName: \"kubernetes.io/projected/426016a6-a8a2-4817-ba4d-3d1662b15b78-kube-api-access-4k2f8\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582086 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-scripts\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582106 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-config-data\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582135 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-swift-storage-0\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" 
(UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582156 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-nb\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582286 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-log-httpd\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582323 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582447 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-sb\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582491 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-run-httpd\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582568 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-config\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.582772 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-log-httpd\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.583041 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-run-httpd\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.583451 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-nb\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.583664 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-config\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.584426 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-svc\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.584963 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-swift-storage-0\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.585102 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-sb\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.592624 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-scripts\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.593177 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.595162 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.596227 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-config-data\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.605361 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fw9rb"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.608415 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrbf\" (UniqueName: \"kubernetes.io/projected/69d1e57b-89c3-49a4-95dc-537dcedf1c54-kube-api-access-nfrbf\") pod \"ceilometer-0\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.616538 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k2f8\" (UniqueName: \"kubernetes.io/projected/426016a6-a8a2-4817-ba4d-3d1662b15b78-kube-api-access-4k2f8\") pod \"dnsmasq-dns-8c7bdb785-zg2hc\" (UID: 
\"426016a6-a8a2-4817-ba4d-3d1662b15b78\") " pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.631673 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c7bdb785-zg2hc"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.645440 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-skc8g"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.646576 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.649249 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.649433 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k48v8" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.649537 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.655113 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-skc8g"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.660009 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c7bdb785-zg2hc"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.660653 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.683376 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-db-sync-config-data\") pod \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.683426 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bklsl\" (UniqueName: \"kubernetes.io/projected/b1c7363e-bafb-4e60-87bb-bb66f77d5943-kube-api-access-bklsl\") pod \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.683580 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-combined-ca-bundle\") pod \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.683658 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-config-data\") pod \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\" (UID: \"b1c7363e-bafb-4e60-87bb-bb66f77d5943\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684009 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-combined-ca-bundle\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684112 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-combined-ca-bundle\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684146 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-config-data\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684167 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs575\" (UniqueName: \"kubernetes.io/projected/df8685a3-3877-4017-b2f4-69474e17a008-kube-api-access-xs575\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684190 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-db-sync-config-data\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684230 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2vz\" (UniqueName: \"kubernetes.io/projected/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-kube-api-access-pj2vz\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684278 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-db-sync-config-data\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684302 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-scripts\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.684327 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-etc-machine-id\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.687553 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.689062 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.689700 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b1c7363e-bafb-4e60-87bb-bb66f77d5943" (UID: "b1c7363e-bafb-4e60-87bb-bb66f77d5943"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.691179 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c7363e-bafb-4e60-87bb-bb66f77d5943-kube-api-access-bklsl" (OuterVolumeSpecName: "kube-api-access-bklsl") pod "b1c7363e-bafb-4e60-87bb-bb66f77d5943" (UID: "b1c7363e-bafb-4e60-87bb-bb66f77d5943"). InnerVolumeSpecName "kube-api-access-bklsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.706900 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"] Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.723333 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1c7363e-bafb-4e60-87bb-bb66f77d5943" (UID: "b1c7363e-bafb-4e60-87bb-bb66f77d5943"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.749399 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-config-data" (OuterVolumeSpecName: "config-data") pod "b1c7363e-bafb-4e60-87bb-bb66f77d5943" (UID: "b1c7363e-bafb-4e60-87bb-bb66f77d5943"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.786913 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-db-sync-config-data\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787047 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-scripts\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787087 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-etc-machine-id\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787121 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787158 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-scripts\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787185 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-config-data\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787219 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787235 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-combined-ca-bundle\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787261 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-combined-ca-bundle\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc 
kubenswrapper[5028]: I1123 07:09:50.787281 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-config\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787305 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pglsc\" (UniqueName: \"kubernetes.io/projected/bccfd807-1efe-4af5-b0a2-45752a3774ee-kube-api-access-pglsc\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787343 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-swift-storage-0\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787365 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-combined-ca-bundle\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787386 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-svc\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787404 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdh9\" (UniqueName: \"kubernetes.io/projected/71de438f-0698-43fd-b519-866d5f34c66b-kube-api-access-dpdh9\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787421 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-config-data\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787437 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs575\" (UniqueName: \"kubernetes.io/projected/df8685a3-3877-4017-b2f4-69474e17a008-kube-api-access-xs575\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787453 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-db-sync-config-data\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc 
kubenswrapper[5028]: I1123 07:09:50.787469 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccfd807-1efe-4af5-b0a2-45752a3774ee-logs\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787498 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2vz\" (UniqueName: \"kubernetes.io/projected/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-kube-api-access-pj2vz\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787546 5028 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787557 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bklsl\" (UniqueName: \"kubernetes.io/projected/b1c7363e-bafb-4e60-87bb-bb66f77d5943-kube-api-access-bklsl\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787568 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.787576 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1c7363e-bafb-4e60-87bb-bb66f77d5943-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.793215 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-etc-machine-id\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.795650 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-combined-ca-bundle\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.795782 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-config-data\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.797227 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-db-sync-config-data\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.798041 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-scripts\") pod 
\"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.801062 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-db-sync-config-data\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.805708 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2vz\" (UniqueName: \"kubernetes.io/projected/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-kube-api-access-pj2vz\") pod \"cinder-db-sync-6nfnj\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.808405 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-combined-ca-bundle\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.812676 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs575\" (UniqueName: \"kubernetes.io/projected/df8685a3-3877-4017-b2f4-69474e17a008-kube-api-access-xs575\") pod \"barbican-db-sync-fw9rb\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.823118 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.857930 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-swbsc" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.858093 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-swbsc" event={"ID":"b1c7363e-bafb-4e60-87bb-bb66f77d5943","Type":"ContainerDied","Data":"168dedc2fd8436c71202e56af8d2138d0a5c54e4baf551889cf77d3917ff36ec"} Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.858147 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168dedc2fd8436c71202e56af8d2138d0a5c54e4baf551889cf77d3917ff36ec" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.858249 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.893736 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-config-data\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.893975 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894065 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-combined-ca-bundle\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894177 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-config\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894237 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pglsc\" (UniqueName: \"kubernetes.io/projected/bccfd807-1efe-4af5-b0a2-45752a3774ee-kube-api-access-pglsc\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894317 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-swift-storage-0\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894393 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-svc\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894421 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpdh9\" (UniqueName: \"kubernetes.io/projected/71de438f-0698-43fd-b519-866d5f34c66b-kube-api-access-dpdh9\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894468 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccfd807-1efe-4af5-b0a2-45752a3774ee-logs\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc 
kubenswrapper[5028]: I1123 07:09:50.894597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.894675 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-scripts\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.895276 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.895549 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-swift-storage-0\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.895864 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-config\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.896230 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccfd807-1efe-4af5-b0a2-45752a3774ee-logs\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.898022 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8645678c8c-7d2q2" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.899193 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.901626 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-combined-ca-bundle\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.902281 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-config-data\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.903606 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-svc\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.921142 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-scripts\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.924032 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.927749 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pglsc\" (UniqueName: \"kubernetes.io/projected/bccfd807-1efe-4af5-b0a2-45752a3774ee-kube-api-access-pglsc\") pod \"placement-db-sync-skc8g\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.932568 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpdh9\" (UniqueName: \"kubernetes.io/projected/71de438f-0698-43fd-b519-866d5f34c66b-kube-api-access-dpdh9\") pod \"dnsmasq-dns-76c8d5b9fc-4vl2t\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") " pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.936380 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.985477 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-skc8g" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.996132 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-nb\") pod \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.996243 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-sb\") pod \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.996354 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-config\") pod \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.996418 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vgmn\" (UniqueName: \"kubernetes.io/projected/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-kube-api-access-5vgmn\") pod \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.996454 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-dns-svc\") pod \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\" (UID: \"a6c1f77f-a062-4a6d-bd97-e9c9d114541f\") " Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.996893 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6c1f77f-a062-4a6d-bd97-e9c9d114541f" (UID: "a6c1f77f-a062-4a6d-bd97-e9c9d114541f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.997042 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6c1f77f-a062-4a6d-bd97-e9c9d114541f" (UID: "a6c1f77f-a062-4a6d-bd97-e9c9d114541f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.997285 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-config" (OuterVolumeSpecName: "config") pod "a6c1f77f-a062-4a6d-bd97-e9c9d114541f" (UID: "a6c1f77f-a062-4a6d-bd97-e9c9d114541f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:50 crc kubenswrapper[5028]: I1123 07:09:50.997578 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6c1f77f-a062-4a6d-bd97-e9c9d114541f" (UID: "a6c1f77f-a062-4a6d-bd97-e9c9d114541f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.005094 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-kube-api-access-5vgmn" (OuterVolumeSpecName: "kube-api-access-5vgmn") pod "a6c1f77f-a062-4a6d-bd97-e9c9d114541f" (UID: "a6c1f77f-a062-4a6d-bd97-e9c9d114541f"). InnerVolumeSpecName "kube-api-access-5vgmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.047900 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.049650 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7hptw"] Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.101793 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.101818 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.101828 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vgmn\" (UniqueName: \"kubernetes.io/projected/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-kube-api-access-5vgmn\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.101837 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.101848 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6c1f77f-a062-4a6d-bd97-e9c9d114541f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.119384 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nszb5"] Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.195694 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"] Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.269267 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-798745f775-68pr2"] Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.270644 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.291254 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798745f775-68pr2"] Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.321876 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c7bdb785-zg2hc"] Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.405848 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-svc\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.406531 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-swift-storage-0\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.406636 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-config\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.406704 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-sb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.406768 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-nb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.406784 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcvb\" (UniqueName: \"kubernetes.io/projected/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-kube-api-access-6qcvb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.509280 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-svc\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.509392 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-swift-storage-0\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: 
\"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.510315 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-svc\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.510481 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-config\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.511141 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-sb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.511168 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-nb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.511187 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcvb\" (UniqueName: \"kubernetes.io/projected/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-kube-api-access-6qcvb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.511703 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-swift-storage-0\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.512906 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-nb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.511095 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-config\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.513397 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-sb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:09:51 crc 
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.528262 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcvb\" (UniqueName: \"kubernetes.io/projected/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-kube-api-access-6qcvb\") pod \"dnsmasq-dns-798745f775-68pr2\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " pod="openstack/dnsmasq-dns-798745f775-68pr2"
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.611533 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798745f775-68pr2"
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.866795 5028 generic.go:334] "Generic (PLEG): container finished" podID="426016a6-a8a2-4817-ba4d-3d1662b15b78" containerID="900a4e02144e4fbb26d8782f40a2f2413c808e47e3cde8bc6cec7b394fa58901" exitCode=0
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.867244 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" event={"ID":"426016a6-a8a2-4817-ba4d-3d1662b15b78","Type":"ContainerDied","Data":"900a4e02144e4fbb26d8782f40a2f2413c808e47e3cde8bc6cec7b394fa58901"}
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.867270 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" event={"ID":"426016a6-a8a2-4817-ba4d-3d1662b15b78","Type":"ContainerStarted","Data":"795e0977eac05411b75ffc53bfd289413505fb440f4cd7c1ada5ba6ef9eb4c2a"}
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.884335 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nszb5" event={"ID":"52dbb1d3-74ce-40af-a86e-e8d80334d704","Type":"ContainerStarted","Data":"88d19b7b033e2e13685a9d25e3e33cc4ed358eed5376265f13a413a01fc54c3f"}
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.884372 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nszb5" event={"ID":"52dbb1d3-74ce-40af-a86e-e8d80334d704","Type":"ContainerStarted","Data":"917ec3e20532311608e2c40b536480d99b4a14c8a9c79807b546d22d30783554"}
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.913144 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8645678c8c-7d2q2"
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.913853 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7hptw" event={"ID":"8373ad19-cd11-4d27-8936-27132ab9bf72","Type":"ContainerStarted","Data":"fb0c136a883e5097dee7d61f52e18b211b1aa04f3990800f6dee2cb5309eb9a9"}
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.913880 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7hptw" event={"ID":"8373ad19-cd11-4d27-8936-27132ab9bf72","Type":"ContainerStarted","Data":"8341340dc6d1740900da38fab0f27a4384c2bfce03b38e26de074c7e7773c1e2"}
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.925524 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nszb5" podStartSLOduration=2.925502779 podStartE2EDuration="2.925502779s" podCreationTimestamp="2025-11-23 07:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:51.917910404 +0000 UTC m=+1175.615315183" watchObservedRunningTime="2025-11-23 07:09:51.925502779 +0000 UTC m=+1175.622907558"
Nov 23 07:09:51 crc kubenswrapper[5028]: I1123 07:09:51.958588 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-7hptw" podStartSLOduration=1.958568993 podStartE2EDuration="1.958568993s" podCreationTimestamp="2025-11-23 07:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:51.951169633 +0000 UTC m=+1175.648574412" watchObservedRunningTime="2025-11-23 07:09:51.958568993 +0000 UTC m=+1175.655973772"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.031008 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8645678c8c-7d2q2"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.040561 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8645678c8c-7d2q2"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.169786 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.254861 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.265141 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.268345 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.272295 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.272474 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fk5p7"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.272580 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.308493 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.310688 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.314637 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.327214 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-scripts\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.327312 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.327348 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-logs\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.327375 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.327717 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-config-data\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.327848 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqdh8\" (UniqueName: \"kubernetes.io/projected/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-kube-api-access-cqdh8\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.327897 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.333247 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434222 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434358 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434450 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-config-data\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434526 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434555 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434579 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434604 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-logs\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434631 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqdh8\" (UniqueName: \"kubernetes.io/projected/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-kube-api-access-cqdh8\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434659 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434705 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-scripts\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434735 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbpn\" (UniqueName: \"kubernetes.io/projected/ffadc24e-f2e0-41e4-8737-477f1c750399-kube-api-access-tfbpn\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434882 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-logs\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.434903 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.435838 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.440586 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.443467 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-logs\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.444196 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-scripts\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.464186 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.465106 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-config-data\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.477824 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqdh8\" (UniqueName: \"kubernetes.io/projected/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-kube-api-access-cqdh8\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.480227 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " pod="openstack/glance-default-external-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.526009 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fw9rb"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.536579 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbpn\" (UniqueName: \"kubernetes.io/projected/ffadc24e-f2e0-41e4-8737-477f1c750399-kube-api-access-tfbpn\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.536819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.536855 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0"
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.536908 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.536924 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.536956 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-logs\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.537403 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-logs\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.537484 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.537539 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"] Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.537904 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.545857 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.549581 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.564401 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.572678 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfbpn\" (UniqueName: \"kubernetes.io/projected/ffadc24e-f2e0-41e4-8737-477f1c750399-kube-api-access-tfbpn\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.577373 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6nfnj"] Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.588217 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-798745f775-68pr2"] Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.596358 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.606187 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-skc8g"] Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.635430 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.661288 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.743438 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.744403 5028 util.go:30] "No sandbox for pod can be found. 
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.745104 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-nb\") pod \"426016a6-a8a2-4817-ba4d-3d1662b15b78\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") "
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.745138 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-config\") pod \"426016a6-a8a2-4817-ba4d-3d1662b15b78\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") "
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.745201 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k2f8\" (UniqueName: \"kubernetes.io/projected/426016a6-a8a2-4817-ba4d-3d1662b15b78-kube-api-access-4k2f8\") pod \"426016a6-a8a2-4817-ba4d-3d1662b15b78\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") "
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.745235 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-swift-storage-0\") pod \"426016a6-a8a2-4817-ba4d-3d1662b15b78\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") "
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.745267 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-svc\") pod \"426016a6-a8a2-4817-ba4d-3d1662b15b78\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") "
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.745292 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-sb\") pod \"426016a6-a8a2-4817-ba4d-3d1662b15b78\" (UID: \"426016a6-a8a2-4817-ba4d-3d1662b15b78\") "
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.791124 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426016a6-a8a2-4817-ba4d-3d1662b15b78-kube-api-access-4k2f8" (OuterVolumeSpecName: "kube-api-access-4k2f8") pod "426016a6-a8a2-4817-ba4d-3d1662b15b78" (UID: "426016a6-a8a2-4817-ba4d-3d1662b15b78"). InnerVolumeSpecName "kube-api-access-4k2f8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.817488 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "426016a6-a8a2-4817-ba4d-3d1662b15b78" (UID: "426016a6-a8a2-4817-ba4d-3d1662b15b78"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.845541 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "426016a6-a8a2-4817-ba4d-3d1662b15b78" (UID: "426016a6-a8a2-4817-ba4d-3d1662b15b78"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.846673 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.846687 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k2f8\" (UniqueName: \"kubernetes.io/projected/426016a6-a8a2-4817-ba4d-3d1662b15b78-kube-api-access-4k2f8\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.846698 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.900471 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-config" (OuterVolumeSpecName: "config") pod "426016a6-a8a2-4817-ba4d-3d1662b15b78" (UID: "426016a6-a8a2-4817-ba4d-3d1662b15b78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.902201 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "426016a6-a8a2-4817-ba4d-3d1662b15b78" (UID: "426016a6-a8a2-4817-ba4d-3d1662b15b78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.922541 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.955108 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "426016a6-a8a2-4817-ba4d-3d1662b15b78" (UID: "426016a6-a8a2-4817-ba4d-3d1662b15b78"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.957189 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.957216 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.957225 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/426016a6-a8a2-4817-ba4d-3d1662b15b78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:52 crc kubenswrapper[5028]: I1123 07:09:52.973573 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.003397 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798745f775-68pr2" event={"ID":"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77","Type":"ContainerStarted","Data":"c784bced4df2d5cba8562ce89ab3bfb288528e1e8de0d2cf71729299cae85829"}
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.023367 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerStarted","Data":"8a99585857b02d37c82229af855b67933741c43c64e4649e76f0cece2a372699"}
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.025626 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6nfnj" event={"ID":"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17","Type":"ContainerStarted","Data":"b8d84bb7a25dab6bc1fabc9d0487549dd1af647d805b94076370baebbdb178c0"}
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.027645 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" event={"ID":"71de438f-0698-43fd-b519-866d5f34c66b","Type":"ContainerStarted","Data":"391709a963d65c4f83ad39e02e3c69f3bc8500214c00abacf21dcc3db26cf103"}
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.041398 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-skc8g" event={"ID":"bccfd807-1efe-4af5-b0a2-45752a3774ee","Type":"ContainerStarted","Data":"040e6ed442b96c59bb5b8b4367c888a71bcfad080b9dd89e89e1c944648b59c3"}
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.047385 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc"
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.052461 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c7bdb785-zg2hc" event={"ID":"426016a6-a8a2-4817-ba4d-3d1662b15b78","Type":"ContainerDied","Data":"795e0977eac05411b75ffc53bfd289413505fb440f4cd7c1ada5ba6ef9eb4c2a"}
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.052524 5028 scope.go:117] "RemoveContainer" containerID="900a4e02144e4fbb26d8782f40a2f2413c808e47e3cde8bc6cec7b394fa58901"
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.079712 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c1f77f-a062-4a6d-bd97-e9c9d114541f" path="/var/lib/kubelet/pods/a6c1f77f-a062-4a6d-bd97-e9c9d114541f/volumes"
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.080366 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw9rb" event={"ID":"df8685a3-3877-4017-b2f4-69474e17a008","Type":"ContainerStarted","Data":"68eb8f91bcf71eee666d650e1f2de73c845a808a2200d9a18bc323a5e39c891f"}
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.258020 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8c7bdb785-zg2hc"]
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.258076 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8c7bdb785-zg2hc"]
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.561897 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 07:09:53 crc kubenswrapper[5028]: E1123 07:09:53.577629 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdcb8ce0_1ddd_463b_b0b8_064b4e30cc77.slice/crio-conmon-b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71de438f_0698_43fd_b519_866d5f34c66b.slice/crio-340b8dc1aeeaeb62d73a5a1bcebe6d3ca540ec6d6c1d250d625fce91caa09283.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 07:09:53 crc kubenswrapper[5028]: I1123 07:09:53.704103 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 07:09:53 crc kubenswrapper[5028]: W1123 07:09:53.714413 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffadc24e_f2e0_41e4_8737_477f1c750399.slice/crio-9d7f30a8cd9bfa007fcce2c030db3bb3f169d81b521a8da5ce54c4876ed7bfa4 WatchSource:0}: Error finding container 9d7f30a8cd9bfa007fcce2c030db3bb3f169d81b521a8da5ce54c4876ed7bfa4: Status 404 returned error can't find the container with id 9d7f30a8cd9bfa007fcce2c030db3bb3f169d81b521a8da5ce54c4876ed7bfa4
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.095476 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12","Type":"ContainerStarted","Data":"2adbafa4484ea3170b3eea6f93e37997d2bb39ab3e6cf0a163316103db268c63"}
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.099327 5028 generic.go:334] "Generic (PLEG): container finished" podID="71de438f-0698-43fd-b519-866d5f34c66b" containerID="340b8dc1aeeaeb62d73a5a1bcebe6d3ca540ec6d6c1d250d625fce91caa09283" exitCode=0
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.099372 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" event={"ID":"71de438f-0698-43fd-b519-866d5f34c66b","Type":"ContainerDied","Data":"340b8dc1aeeaeb62d73a5a1bcebe6d3ca540ec6d6c1d250d625fce91caa09283"}
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.104316 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ffadc24e-f2e0-41e4-8737-477f1c750399","Type":"ContainerStarted","Data":"9d7f30a8cd9bfa007fcce2c030db3bb3f169d81b521a8da5ce54c4876ed7bfa4"}
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.128069 5028 generic.go:334] "Generic (PLEG): container finished" podID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerID="b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84" exitCode=0
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.128135 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798745f775-68pr2" event={"ID":"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77","Type":"ContainerDied","Data":"b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84"}
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.517842 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.604202 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-svc\") pod \"71de438f-0698-43fd-b519-866d5f34c66b\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") "
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.604254 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpdh9\" (UniqueName: \"kubernetes.io/projected/71de438f-0698-43fd-b519-866d5f34c66b-kube-api-access-dpdh9\") pod \"71de438f-0698-43fd-b519-866d5f34c66b\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") "
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.604417 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-nb\") pod \"71de438f-0698-43fd-b519-866d5f34c66b\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") "
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.604440 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-config\") pod \"71de438f-0698-43fd-b519-866d5f34c66b\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") "
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.605023 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-swift-storage-0\") pod \"71de438f-0698-43fd-b519-866d5f34c66b\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") "
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.605368 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-sb\") pod \"71de438f-0698-43fd-b519-866d5f34c66b\" (UID: \"71de438f-0698-43fd-b519-866d5f34c66b\") "
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.610525 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71de438f-0698-43fd-b519-866d5f34c66b-kube-api-access-dpdh9" (OuterVolumeSpecName: "kube-api-access-dpdh9") pod "71de438f-0698-43fd-b519-866d5f34c66b" (UID: "71de438f-0698-43fd-b519-866d5f34c66b"). InnerVolumeSpecName "kube-api-access-dpdh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.634362 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71de438f-0698-43fd-b519-866d5f34c66b" (UID: "71de438f-0698-43fd-b519-866d5f34c66b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.644327 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71de438f-0698-43fd-b519-866d5f34c66b" (UID: "71de438f-0698-43fd-b519-866d5f34c66b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.648190 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71de438f-0698-43fd-b519-866d5f34c66b" (UID: "71de438f-0698-43fd-b519-866d5f34c66b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.655468 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-config" (OuterVolumeSpecName: "config") pod "71de438f-0698-43fd-b519-866d5f34c66b" (UID: "71de438f-0698-43fd-b519-866d5f34c66b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.656370 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "71de438f-0698-43fd-b519-866d5f34c66b" (UID: "71de438f-0698-43fd-b519-866d5f34c66b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.708035 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.708077 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.708086 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.708095 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpdh9\" (UniqueName: \"kubernetes.io/projected/71de438f-0698-43fd-b519-866d5f34c66b-kube-api-access-dpdh9\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.708104 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:54 crc kubenswrapper[5028]: I1123 07:09:54.708112 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71de438f-0698-43fd-b519-866d5f34c66b-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.071585 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="426016a6-a8a2-4817-ba4d-3d1662b15b78" path="/var/lib/kubelet/pods/426016a6-a8a2-4817-ba4d-3d1662b15b78/volumes"
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.149335 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12","Type":"ContainerStarted","Data":"6b18b3e0c4b65adef4d28a0e5277613fdd646ab275f28be82368d549c9fed53f"}
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.158455 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.158493 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8d5b9fc-4vl2t" event={"ID":"71de438f-0698-43fd-b519-866d5f34c66b","Type":"ContainerDied","Data":"391709a963d65c4f83ad39e02e3c69f3bc8500214c00abacf21dcc3db26cf103"}
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.158539 5028 scope.go:117] "RemoveContainer" containerID="340b8dc1aeeaeb62d73a5a1bcebe6d3ca540ec6d6c1d250d625fce91caa09283"
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.164379 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ffadc24e-f2e0-41e4-8737-477f1c750399","Type":"ContainerStarted","Data":"889be405f8f51032f045915ff2622173cb02739d7f8604bc66394b734b485889"}
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.172340 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798745f775-68pr2" event={"ID":"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77","Type":"ContainerStarted","Data":"95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913"}
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.173252 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-798745f775-68pr2"
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.266544 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"]
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.276825 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c8d5b9fc-4vl2t"]
Nov 23 07:09:55 crc kubenswrapper[5028]: I1123 07:09:55.281021 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-798745f775-68pr2" podStartSLOduration=4.280998802 podStartE2EDuration="4.280998802s" podCreationTimestamp="2025-11-23 07:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:55.229402337 +0000 UTC m=+1178.926807116" watchObservedRunningTime="2025-11-23 07:09:55.280998802 +0000 UTC m=+1178.978403581"
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.181407 5028 generic.go:334] "Generic (PLEG): container finished" podID="52dbb1d3-74ce-40af-a86e-e8d80334d704" containerID="88d19b7b033e2e13685a9d25e3e33cc4ed358eed5376265f13a413a01fc54c3f" exitCode=0
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.181476 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nszb5" event={"ID":"52dbb1d3-74ce-40af-a86e-e8d80334d704","Type":"ContainerDied","Data":"88d19b7b033e2e13685a9d25e3e33cc4ed358eed5376265f13a413a01fc54c3f"}
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.189528 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ffadc24e-f2e0-41e4-8737-477f1c750399","Type":"ContainerStarted","Data":"f307a29a051fa4fe22d8a5ed6a7ed394d1bbfea16ff83d268d6f41413f02a1e3"}
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.189643 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-log" containerID="cri-o://889be405f8f51032f045915ff2622173cb02739d7f8604bc66394b734b485889" gracePeriod=30
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.189656 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-httpd" containerID="cri-o://f307a29a051fa4fe22d8a5ed6a7ed394d1bbfea16ff83d268d6f41413f02a1e3" gracePeriod=30
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.198871 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-log" containerID="cri-o://6b18b3e0c4b65adef4d28a0e5277613fdd646ab275f28be82368d549c9fed53f" gracePeriod=30
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.199143 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-httpd" containerID="cri-o://0dbb66c92220075adca94240e99bcf5986f518de77b77d203b335cf01d8dd796" gracePeriod=30
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.199225 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12","Type":"ContainerStarted","Data":"0dbb66c92220075adca94240e99bcf5986f518de77b77d203b335cf01d8dd796"}
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.258367 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.258352455 podStartE2EDuration="5.258352455s" podCreationTimestamp="2025-11-23 07:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:56.228569021 +0000 UTC m=+1179.925973820" watchObservedRunningTime="2025-11-23 07:09:56.258352455 +0000 UTC m=+1179.955757234"
Nov 23 07:09:56 crc kubenswrapper[5028]: I1123 07:09:56.260968 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.260942818 podStartE2EDuration="5.260942818s" podCreationTimestamp="2025-11-23 07:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:09:56.249286754 +0000 UTC m=+1179.946691533" watchObservedRunningTime="2025-11-23 07:09:56.260942818 +0000 UTC m=+1179.958347597"
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.067694 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71de438f-0698-43fd-b519-866d5f34c66b" path="/var/lib/kubelet/pods/71de438f-0698-43fd-b519-866d5f34c66b/volumes"
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.220710 5028 generic.go:334] "Generic (PLEG): container finished" podID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerID="f307a29a051fa4fe22d8a5ed6a7ed394d1bbfea16ff83d268d6f41413f02a1e3" exitCode=0
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.221081 5028 generic.go:334] "Generic (PLEG): container finished" podID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerID="889be405f8f51032f045915ff2622173cb02739d7f8604bc66394b734b485889" exitCode=143
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.220810 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ffadc24e-f2e0-41e4-8737-477f1c750399","Type":"ContainerDied","Data":"f307a29a051fa4fe22d8a5ed6a7ed394d1bbfea16ff83d268d6f41413f02a1e3"}
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.221183 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ffadc24e-f2e0-41e4-8737-477f1c750399","Type":"ContainerDied","Data":"889be405f8f51032f045915ff2622173cb02739d7f8604bc66394b734b485889"}
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.224865 5028 generic.go:334] "Generic (PLEG): container finished" podID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerID="0dbb66c92220075adca94240e99bcf5986f518de77b77d203b335cf01d8dd796" exitCode=0
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.224892 5028 generic.go:334] "Generic (PLEG): container finished" podID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerID="6b18b3e0c4b65adef4d28a0e5277613fdd646ab275f28be82368d549c9fed53f" exitCode=143
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.224968 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12","Type":"ContainerDied","Data":"0dbb66c92220075adca94240e99bcf5986f518de77b77d203b335cf01d8dd796"}
Nov 23 07:09:57 crc kubenswrapper[5028]: I1123 07:09:57.225033 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12","Type":"ContainerDied","Data":"6b18b3e0c4b65adef4d28a0e5277613fdd646ab275f28be82368d549c9fed53f"}
Nov 23 07:10:01 crc kubenswrapper[5028]: I1123 07:10:01.614162 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-798745f775-68pr2"
Nov 23 07:10:01 crc kubenswrapper[5028]: I1123 07:10:01.663302 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf8bcbfcf-rbqg4"]
Nov 23 07:10:01 crc kubenswrapper[5028]: I1123 07:10:01.663543 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="dnsmasq-dns" containerID="cri-o://6f76ca77e083f2b23faba275f2a28f92caf5697e1ac8a778a1cfc5fe208083b9" gracePeriod=10
Nov 23 07:10:02 crc kubenswrapper[5028]: I1123 07:10:02.264924 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused"
Nov 23 07:10:02 crc kubenswrapper[5028]: I1123 07:10:02.272030 5028 generic.go:334] "Generic (PLEG): container finished" podID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerID="6f76ca77e083f2b23faba275f2a28f92caf5697e1ac8a778a1cfc5fe208083b9" exitCode=0
Nov 23 07:10:02 crc kubenswrapper[5028]: I1123 07:10:02.272064 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" event={"ID":"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa","Type":"ContainerDied","Data":"6f76ca77e083f2b23faba275f2a28f92caf5697e1ac8a778a1cfc5fe208083b9"}
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.786237 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nszb5"
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.917861 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-scripts\") pod \"52dbb1d3-74ce-40af-a86e-e8d80334d704\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") "
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.918006 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-fernet-keys\") pod \"52dbb1d3-74ce-40af-a86e-e8d80334d704\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") "
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.918066 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccwwk\" (UniqueName: \"kubernetes.io/projected/52dbb1d3-74ce-40af-a86e-e8d80334d704-kube-api-access-ccwwk\") pod \"52dbb1d3-74ce-40af-a86e-e8d80334d704\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") "
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.918144 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-config-data\") pod \"52dbb1d3-74ce-40af-a86e-e8d80334d704\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") "
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.918187 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-combined-ca-bundle\") pod \"52dbb1d3-74ce-40af-a86e-e8d80334d704\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") "
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.918228 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-credential-keys\") pod \"52dbb1d3-74ce-40af-a86e-e8d80334d704\" (UID: \"52dbb1d3-74ce-40af-a86e-e8d80334d704\") "
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.927252 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "52dbb1d3-74ce-40af-a86e-e8d80334d704" (UID: "52dbb1d3-74ce-40af-a86e-e8d80334d704"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.927305 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-scripts" (OuterVolumeSpecName: "scripts") pod "52dbb1d3-74ce-40af-a86e-e8d80334d704" (UID: "52dbb1d3-74ce-40af-a86e-e8d80334d704"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.927553 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52dbb1d3-74ce-40af-a86e-e8d80334d704-kube-api-access-ccwwk" (OuterVolumeSpecName: "kube-api-access-ccwwk") pod "52dbb1d3-74ce-40af-a86e-e8d80334d704" (UID: "52dbb1d3-74ce-40af-a86e-e8d80334d704"). InnerVolumeSpecName "kube-api-access-ccwwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.927737 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "52dbb1d3-74ce-40af-a86e-e8d80334d704" (UID: "52dbb1d3-74ce-40af-a86e-e8d80334d704"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.946569 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52dbb1d3-74ce-40af-a86e-e8d80334d704" (UID: "52dbb1d3-74ce-40af-a86e-e8d80334d704"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:03 crc kubenswrapper[5028]: I1123 07:10:03.952440 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-config-data" (OuterVolumeSpecName: "config-data") pod "52dbb1d3-74ce-40af-a86e-e8d80334d704" (UID: "52dbb1d3-74ce-40af-a86e-e8d80334d704"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.020268 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.020306 5028 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.020316 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.020327 5028 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.020335 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccwwk\" (UniqueName: \"kubernetes.io/projected/52dbb1d3-74ce-40af-a86e-e8d80334d704-kube-api-access-ccwwk\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.020346 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52dbb1d3-74ce-40af-a86e-e8d80334d704-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.289555 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nszb5" event={"ID":"52dbb1d3-74ce-40af-a86e-e8d80334d704","Type":"ContainerDied","Data":"917ec3e20532311608e2c40b536480d99b4a14c8a9c79807b546d22d30783554"}
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.289591 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="917ec3e20532311608e2c40b536480d99b4a14c8a9c79807b546d22d30783554"
Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.289598 5028 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nszb5" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.878459 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nszb5"] Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.885511 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nszb5"] Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.965843 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xf95z"] Nov 23 07:10:04 crc kubenswrapper[5028]: E1123 07:10:04.966225 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52dbb1d3-74ce-40af-a86e-e8d80334d704" containerName="keystone-bootstrap" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.966244 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="52dbb1d3-74ce-40af-a86e-e8d80334d704" containerName="keystone-bootstrap" Nov 23 07:10:04 crc kubenswrapper[5028]: E1123 07:10:04.966268 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71de438f-0698-43fd-b519-866d5f34c66b" containerName="init" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.966275 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="71de438f-0698-43fd-b519-866d5f34c66b" containerName="init" Nov 23 07:10:04 crc kubenswrapper[5028]: E1123 07:10:04.966288 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="426016a6-a8a2-4817-ba4d-3d1662b15b78" containerName="init" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.966309 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="426016a6-a8a2-4817-ba4d-3d1662b15b78" containerName="init" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.966498 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="52dbb1d3-74ce-40af-a86e-e8d80334d704" containerName="keystone-bootstrap" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.966514 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="426016a6-a8a2-4817-ba4d-3d1662b15b78" containerName="init" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.966525 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="71de438f-0698-43fd-b519-866d5f34c66b" containerName="init" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.967173 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.972399 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.972664 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.972819 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.973104 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-22nmw" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.973273 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 07:10:04 crc kubenswrapper[5028]: I1123 07:10:04.981437 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xf95z"] Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.063813 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52dbb1d3-74ce-40af-a86e-e8d80334d704" path="/var/lib/kubelet/pods/52dbb1d3-74ce-40af-a86e-e8d80334d704/volumes" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.138555 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-config-data\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.138800 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-combined-ca-bundle\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.138890 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-scripts\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.139102 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-credential-keys\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.140374 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmgv\" (UniqueName: \"kubernetes.io/projected/b51c114c-7132-45e8-9e6e-ec0c783ede0f-kube-api-access-fgmgv\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.140591 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-fernet-keys\") pod 
\"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.242013 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-fernet-keys\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.242086 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-config-data\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.242145 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-combined-ca-bundle\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.242181 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-scripts\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.242242 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-credential-keys\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.242283 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgmgv\" (UniqueName: \"kubernetes.io/projected/b51c114c-7132-45e8-9e6e-ec0c783ede0f-kube-api-access-fgmgv\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.246338 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-scripts\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.252589 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-config-data\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.257369 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgmgv\" (UniqueName: \"kubernetes.io/projected/b51c114c-7132-45e8-9e6e-ec0c783ede0f-kube-api-access-fgmgv\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 
07:10:05.257602 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-fernet-keys\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.258222 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-combined-ca-bundle\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.258922 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-credential-keys\") pod \"keystone-bootstrap-xf95z\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:05 crc kubenswrapper[5028]: I1123 07:10:05.296488 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:07 crc kubenswrapper[5028]: I1123 07:10:07.264587 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Nov 23 07:10:07 crc kubenswrapper[5028]: E1123 07:10:07.712175 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140" Nov 23 07:10:07 crc kubenswrapper[5028]: E1123 07:10:07.712622 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69hf7h56dhch675h68ch544h596h5cch88hcbh597h68ch596h585hc8h684h54dh688hfch5d9hbfh5c7h67h5c8h68h577hfch666hfh669h647q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nfrbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(69d1e57b-89c3-49a4-95dc-537dcedf1c54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.198079 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.206153 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.215835 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.329767 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-dns-svc\") pod \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.329842 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-combined-ca-bundle\") pod \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.329876 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-nb\") pod \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.329910 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-scripts\") pod \"ffadc24e-f2e0-41e4-8737-477f1c750399\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.329942 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-httpd-run\") pod \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.329980 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfbpn\" (UniqueName: \"kubernetes.io/projected/ffadc24e-f2e0-41e4-8737-477f1c750399-kube-api-access-tfbpn\") pod \"ffadc24e-f2e0-41e4-8737-477f1c750399\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330002 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-combined-ca-bundle\") pod \"ffadc24e-f2e0-41e4-8737-477f1c750399\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330034 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-logs\") pod \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330067 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330137 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ffadc24e-f2e0-41e4-8737-477f1c750399\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 
07:10:16.330164 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-logs\") pod \"ffadc24e-f2e0-41e4-8737-477f1c750399\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330190 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-httpd-run\") pod \"ffadc24e-f2e0-41e4-8737-477f1c750399\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330230 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-config-data\") pod \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330263 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqdh8\" (UniqueName: \"kubernetes.io/projected/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-kube-api-access-cqdh8\") pod \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330295 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx4mp\" (UniqueName: \"kubernetes.io/projected/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-kube-api-access-nx4mp\") pod \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330317 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-sb\") pod \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330342 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-config\") pod \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\" (UID: \"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330359 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-config-data\") pod \"ffadc24e-f2e0-41e4-8737-477f1c750399\" (UID: \"ffadc24e-f2e0-41e4-8737-477f1c750399\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330375 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-scripts\") pod \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\" (UID: \"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12\") " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330413 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" (UID: "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.330691 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.335789 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-kube-api-access-cqdh8" (OuterVolumeSpecName: "kube-api-access-cqdh8") pod "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" (UID: "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12"). InnerVolumeSpecName "kube-api-access-cqdh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.336066 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-logs" (OuterVolumeSpecName: "logs") pod "ffadc24e-f2e0-41e4-8737-477f1c750399" (UID: "ffadc24e-f2e0-41e4-8737-477f1c750399"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.336297 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "ffadc24e-f2e0-41e4-8737-477f1c750399" (UID: "ffadc24e-f2e0-41e4-8737-477f1c750399"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.336863 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffadc24e-f2e0-41e4-8737-477f1c750399-kube-api-access-tfbpn" (OuterVolumeSpecName: "kube-api-access-tfbpn") pod "ffadc24e-f2e0-41e4-8737-477f1c750399" (UID: "ffadc24e-f2e0-41e4-8737-477f1c750399"). InnerVolumeSpecName "kube-api-access-tfbpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.337059 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-logs" (OuterVolumeSpecName: "logs") pod "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" (UID: "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.337283 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-scripts" (OuterVolumeSpecName: "scripts") pod "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" (UID: "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.337869 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ffadc24e-f2e0-41e4-8737-477f1c750399" (UID: "ffadc24e-f2e0-41e4-8737-477f1c750399"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.338531 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-kube-api-access-nx4mp" (OuterVolumeSpecName: "kube-api-access-nx4mp") pod "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" (UID: "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa"). InnerVolumeSpecName "kube-api-access-nx4mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.339394 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-scripts" (OuterVolumeSpecName: "scripts") pod "ffadc24e-f2e0-41e4-8737-477f1c750399" (UID: "ffadc24e-f2e0-41e4-8737-477f1c750399"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.340144 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" (UID: "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.364554 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" (UID: "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.377228 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-config" (OuterVolumeSpecName: "config") pod "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" (UID: "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.382368 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffadc24e-f2e0-41e4-8737-477f1c750399" (UID: "ffadc24e-f2e0-41e4-8737-477f1c750399"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.390346 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" (UID: "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.393456 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" (UID: "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.398633 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" event={"ID":"4d975fcb-e81a-4979-bd86-7d0f03c7d6fa","Type":"ContainerDied","Data":"59c70017e6a2c3993cda7e0a65edb55915f5141b149bf58e68093c0a2c83e570"} Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.398663 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.398704 5028 scope.go:117] "RemoveContainer" containerID="6f76ca77e083f2b23faba275f2a28f92caf5697e1ac8a778a1cfc5fe208083b9" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.402319 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.402313 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ffadc24e-f2e0-41e4-8737-477f1c750399","Type":"ContainerDied","Data":"9d7f30a8cd9bfa007fcce2c030db3bb3f169d81b521a8da5ce54c4876ed7bfa4"} Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.404654 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3da74b5d-141e-4b4f-a94c-f5feb0bb8b12","Type":"ContainerDied","Data":"2adbafa4484ea3170b3eea6f93e37997d2bb39ab3e6cf0a163316103db268c63"} Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.404694 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.409762 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-config-data" (OuterVolumeSpecName: "config-data") pod "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" (UID: "3da74b5d-141e-4b4f-a94c-f5feb0bb8b12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.412573 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" (UID: "4d975fcb-e81a-4979-bd86-7d0f03c7d6fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.415359 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-config-data" (OuterVolumeSpecName: "config-data") pod "ffadc24e-f2e0-41e4-8737-477f1c750399" (UID: "ffadc24e-f2e0-41e4-8737-477f1c750399"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432077 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432108 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqdh8\" (UniqueName: \"kubernetes.io/projected/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-kube-api-access-cqdh8\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432121 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx4mp\" (UniqueName: \"kubernetes.io/projected/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-kube-api-access-nx4mp\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432130 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432140 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432148 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432156 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432164 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432172 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432180 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432187 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432195 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfbpn\" (UniqueName: \"kubernetes.io/projected/ffadc24e-f2e0-41e4-8737-477f1c750399-kube-api-access-tfbpn\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432203 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffadc24e-f2e0-41e4-8737-477f1c750399-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 
07:10:16.432210 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432238 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432251 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432261 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.432269 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ffadc24e-f2e0-41e4-8737-477f1c750399-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.448635 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.449545 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.533579 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.533610 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.741321 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf8bcbfcf-rbqg4"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.751113 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf8bcbfcf-rbqg4"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.764186 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.777218 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.793726 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.804522 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.812002 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: E1123 07:10:16.813076 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="dnsmasq-dns" Nov 23 07:10:16 crc kubenswrapper[5028]: 
I1123 07:10:16.813097 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="dnsmasq-dns" Nov 23 07:10:16 crc kubenswrapper[5028]: E1123 07:10:16.813136 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="init" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813145 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="init" Nov 23 07:10:16 crc kubenswrapper[5028]: E1123 07:10:16.813155 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-log" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813161 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-log" Nov 23 07:10:16 crc kubenswrapper[5028]: E1123 07:10:16.813168 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-log" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813174 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-log" Nov 23 07:10:16 crc kubenswrapper[5028]: E1123 07:10:16.813207 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-httpd" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813216 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-httpd" Nov 23 07:10:16 crc kubenswrapper[5028]: E1123 07:10:16.813229 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-httpd" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813235 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-httpd" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813461 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-log" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813474 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="dnsmasq-dns" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813483 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-httpd" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813519 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" containerName="glance-log" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.813529 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" containerName="glance-httpd" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.814966 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.819552 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.821032 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.821553 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.821615 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fk5p7" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.821786 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.823312 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.826168 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.828319 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.831164 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.844444 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.939697 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.939748 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88fds\" (UniqueName: \"kubernetes.io/projected/7f520978-76fa-4bde-80df-dff8d693eb23-kube-api-access-88fds\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.939783 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-logs\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.939850 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhhdf\" (UniqueName: \"kubernetes.io/projected/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-kube-api-access-zhhdf\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.939876 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-scripts\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.939905 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.939928 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940060 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940157 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940222 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940266 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940287 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940396 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-config-data\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc 
kubenswrapper[5028]: I1123 07:10:16.940411 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940453 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:16 crc kubenswrapper[5028]: I1123 07:10:16.940525 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-logs\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042170 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88fds\" (UniqueName: \"kubernetes.io/projected/7f520978-76fa-4bde-80df-dff8d693eb23-kube-api-access-88fds\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042556 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042595 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-logs\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042634 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhhdf\" (UniqueName: \"kubernetes.io/projected/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-kube-api-access-zhhdf\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042655 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-scripts\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042684 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042704 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042740 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042785 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042821 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042851 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042872 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042969 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-config-data\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.042992 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.043027 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.043071 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-logs\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.043195 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-logs\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.043596 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.043727 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-logs\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.043827 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.046077 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.048450 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.048936 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.049177 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.052434 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.056190 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.056399 5028 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"glance-default-external-config-data" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.063661 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-config-data\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.064672 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-scripts\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.066644 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.067697 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.068254 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.072430 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.074810 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88fds\" (UniqueName: \"kubernetes.io/projected/7f520978-76fa-4bde-80df-dff8d693eb23-kube-api-access-88fds\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.075042 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.078932 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.082632 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da74b5d-141e-4b4f-a94c-f5feb0bb8b12" path="/var/lib/kubelet/pods/3da74b5d-141e-4b4f-a94c-f5feb0bb8b12/volumes" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.083558 
5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhhdf\" (UniqueName: \"kubernetes.io/projected/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-kube-api-access-zhhdf\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.084182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.088278 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" path="/var/lib/kubelet/pods/4d975fcb-e81a-4979-bd86-7d0f03c7d6fa/volumes" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.089066 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffadc24e-f2e0-41e4-8737-477f1c750399" path="/var/lib/kubelet/pods/ffadc24e-f2e0-41e4-8737-477f1c750399/volumes" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.103173 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.144941 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fk5p7" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.149831 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.153800 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.265378 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf8bcbfcf-rbqg4" podUID="4d975fcb-e81a-4979-bd86-7d0f03c7d6fa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: i/o timeout" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.548867 5028 scope.go:117] "RemoveContainer" containerID="7a7ce5c4354970c819b37f23fbb63c64188e276738d60c06d2cb418c510e273c" Nov 23 07:10:17 crc kubenswrapper[5028]: E1123 07:10:17.575338 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 23 07:10:17 crc kubenswrapper[5028]: E1123 07:10:17.575530 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj2vz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6nfnj_openstack(4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17): ErrImagePull: rpc error: 
code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 07:10:17 crc kubenswrapper[5028]: E1123 07:10:17.576960 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6nfnj" podUID="4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.941526 5028 scope.go:117] "RemoveContainer" containerID="f307a29a051fa4fe22d8a5ed6a7ed394d1bbfea16ff83d268d6f41413f02a1e3" Nov 23 07:10:17 crc kubenswrapper[5028]: I1123 07:10:17.997987 5028 scope.go:117] "RemoveContainer" containerID="889be405f8f51032f045915ff2622173cb02739d7f8604bc66394b734b485889" Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.085484 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xf95z"] Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.103822 5028 scope.go:117] "RemoveContainer" containerID="0dbb66c92220075adca94240e99bcf5986f518de77b77d203b335cf01d8dd796" Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.132359 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.139677 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.305155 5028 scope.go:117] "RemoveContainer" containerID="6b18b3e0c4b65adef4d28a0e5277613fdd646ab275f28be82368d549c9fed53f" Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.424679 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37","Type":"ContainerStarted","Data":"bc209808a3b9b76c9e40e464bda550080bf436c8be8b220d6841516b422e560f"} Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.427350 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xf95z" event={"ID":"b51c114c-7132-45e8-9e6e-ec0c783ede0f","Type":"ContainerStarted","Data":"94fa8c5b719a3c369b87da39931aa383d2e1b4d527586259de63e4329fe56a49"} Nov 23 07:10:18 crc kubenswrapper[5028]: E1123 07:10:18.437816 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-6nfnj" podUID="4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" Nov 23 07:10:18 crc kubenswrapper[5028]: I1123 07:10:18.483728 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:10:18 crc kubenswrapper[5028]: W1123 07:10:18.509474 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f520978_76fa_4bde_80df_dff8d693eb23.slice/crio-9722f5e92d3e38d3b40b17305614d252b46962c527f4050e1ace2d1588123fa6 WatchSource:0}: Error finding container 9722f5e92d3e38d3b40b17305614d252b46962c527f4050e1ace2d1588123fa6: Status 404 returned error can't find the container with id 9722f5e92d3e38d3b40b17305614d252b46962c527f4050e1ace2d1588123fa6 Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.450325 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-skc8g" event={"ID":"bccfd807-1efe-4af5-b0a2-45752a3774ee","Type":"ContainerStarted","Data":"4b4b6cd6d812dd10bca7b5bf605672ef58794edc00c2fe75180902fd552e0d18"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.453408 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37","Type":"ContainerStarted","Data":"ff5aef9adabfdd605326740c8d8a830c5c23f6cca3761f541107ce03bf8b3745"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.453447 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37","Type":"ContainerStarted","Data":"77fa97d7bf7f52ef3b1c18b87dc33585d5409dbe348234c57d70fa538bb5a744"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.455657 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xf95z" event={"ID":"b51c114c-7132-45e8-9e6e-ec0c783ede0f","Type":"ContainerStarted","Data":"08d729c3c5943787d8d97cf784800554db6bc9ac7a28eebd0fbec742d8db6711"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.457529 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw9rb" event={"ID":"df8685a3-3877-4017-b2f4-69474e17a008","Type":"ContainerStarted","Data":"9023b2e6e2d8f4408363912b73a54f2f4c4dd8f5701bb322b8691b1effd16944"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.459703 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerStarted","Data":"3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.462360 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f520978-76fa-4bde-80df-dff8d693eb23","Type":"ContainerStarted","Data":"be508a2dec08b87a3f5be29c9ff855aaffe2608171428f79c6f3f221f46cb9f6"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.462395 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f520978-76fa-4bde-80df-dff8d693eb23","Type":"ContainerStarted","Data":"9722f5e92d3e38d3b40b17305614d252b46962c527f4050e1ace2d1588123fa6"} Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.471964 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-skc8g" podStartSLOduration=4.607936218 podStartE2EDuration="29.471935306s" podCreationTimestamp="2025-11-23 07:09:50 +0000 UTC" firstStartedPulling="2025-11-23 07:09:52.704049028 +0000 UTC m=+1176.401453807" lastFinishedPulling="2025-11-23 07:10:17.568048116 +0000 UTC m=+1201.265452895" observedRunningTime="2025-11-23 07:10:19.464626639 +0000 UTC m=+1203.162031428" watchObservedRunningTime="2025-11-23 07:10:19.471935306 +0000 UTC m=+1203.169340085" Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.504284 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.504260702 podStartE2EDuration="3.504260702s" podCreationTimestamp="2025-11-23 07:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:19.493987653 +0000 UTC m=+1203.191392452" watchObservedRunningTime="2025-11-23 07:10:19.504260702 +0000 UTC 
m=+1203.201665481" Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.517195 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xf95z" podStartSLOduration=15.517177866 podStartE2EDuration="15.517177866s" podCreationTimestamp="2025-11-23 07:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:19.508122396 +0000 UTC m=+1203.205527195" watchObservedRunningTime="2025-11-23 07:10:19.517177866 +0000 UTC m=+1203.214582645" Nov 23 07:10:19 crc kubenswrapper[5028]: I1123 07:10:19.528293 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fw9rb" podStartSLOduration=4.507308772 podStartE2EDuration="29.528276216s" podCreationTimestamp="2025-11-23 07:09:50 +0000 UTC" firstStartedPulling="2025-11-23 07:09:52.54618978 +0000 UTC m=+1176.243594559" lastFinishedPulling="2025-11-23 07:10:17.567157234 +0000 UTC m=+1201.264562003" observedRunningTime="2025-11-23 07:10:19.52431091 +0000 UTC m=+1203.221715689" watchObservedRunningTime="2025-11-23 07:10:19.528276216 +0000 UTC m=+1203.225680995" Nov 23 07:10:20 crc kubenswrapper[5028]: I1123 07:10:20.478159 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f520978-76fa-4bde-80df-dff8d693eb23","Type":"ContainerStarted","Data":"69c8177af17d342a89729e9de14e1a959c4d823fa304d91c39b37495ccdcceae"} Nov 23 07:10:20 crc kubenswrapper[5028]: I1123 07:10:20.510762 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.510741613 podStartE2EDuration="4.510741613s" podCreationTimestamp="2025-11-23 07:10:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:20.501906448 +0000 UTC m=+1204.199311237" watchObservedRunningTime="2025-11-23 07:10:20.510741613 +0000 UTC m=+1204.208146402" Nov 23 07:10:26 crc kubenswrapper[5028]: I1123 07:10:26.536379 5028 generic.go:334] "Generic (PLEG): container finished" podID="bccfd807-1efe-4af5-b0a2-45752a3774ee" containerID="4b4b6cd6d812dd10bca7b5bf605672ef58794edc00c2fe75180902fd552e0d18" exitCode=0 Nov 23 07:10:26 crc kubenswrapper[5028]: I1123 07:10:26.536475 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-skc8g" event={"ID":"bccfd807-1efe-4af5-b0a2-45752a3774ee","Type":"ContainerDied","Data":"4b4b6cd6d812dd10bca7b5bf605672ef58794edc00c2fe75180902fd552e0d18"} Nov 23 07:10:26 crc kubenswrapper[5028]: I1123 07:10:26.539062 5028 generic.go:334] "Generic (PLEG): container finished" podID="b51c114c-7132-45e8-9e6e-ec0c783ede0f" containerID="08d729c3c5943787d8d97cf784800554db6bc9ac7a28eebd0fbec742d8db6711" exitCode=0 Nov 23 07:10:26 crc kubenswrapper[5028]: I1123 07:10:26.539163 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xf95z" event={"ID":"b51c114c-7132-45e8-9e6e-ec0c783ede0f","Type":"ContainerDied","Data":"08d729c3c5943787d8d97cf784800554db6bc9ac7a28eebd0fbec742d8db6711"} Nov 23 07:10:26 crc kubenswrapper[5028]: I1123 07:10:26.541161 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerStarted","Data":"31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1"} Nov 23 07:10:27 crc 
kubenswrapper[5028]: I1123 07:10:27.150507 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.150584 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.154260 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.154309 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.185015 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.186842 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.189585 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.195523 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.552072 5028 generic.go:334] "Generic (PLEG): container finished" podID="df8685a3-3877-4017-b2f4-69474e17a008" containerID="9023b2e6e2d8f4408363912b73a54f2f4c4dd8f5701bb322b8691b1effd16944" exitCode=0 Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.552195 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw9rb" event={"ID":"df8685a3-3877-4017-b2f4-69474e17a008","Type":"ContainerDied","Data":"9023b2e6e2d8f4408363912b73a54f2f4c4dd8f5701bb322b8691b1effd16944"} Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.554825 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.554848 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.554858 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:27 crc kubenswrapper[5028]: I1123 07:10:27.554876 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.126014 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-skc8g" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.142114 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.174661 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-combined-ca-bundle\") pod \"bccfd807-1efe-4af5-b0a2-45752a3774ee\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.174729 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-combined-ca-bundle\") pod \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.174800 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pglsc\" (UniqueName: \"kubernetes.io/projected/bccfd807-1efe-4af5-b0a2-45752a3774ee-kube-api-access-pglsc\") pod \"bccfd807-1efe-4af5-b0a2-45752a3774ee\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.174818 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-fernet-keys\") pod \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.174879 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-scripts\") pod \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.174914 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-scripts\") pod \"bccfd807-1efe-4af5-b0a2-45752a3774ee\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.174989 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgmgv\" (UniqueName: \"kubernetes.io/projected/b51c114c-7132-45e8-9e6e-ec0c783ede0f-kube-api-access-fgmgv\") pod \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.175013 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccfd807-1efe-4af5-b0a2-45752a3774ee-logs\") pod \"bccfd807-1efe-4af5-b0a2-45752a3774ee\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.175054 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-credential-keys\") pod \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\" (UID: \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.175080 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-config-data\") pod \"b51c114c-7132-45e8-9e6e-ec0c783ede0f\" (UID: 
\"b51c114c-7132-45e8-9e6e-ec0c783ede0f\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.175105 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-config-data\") pod \"bccfd807-1efe-4af5-b0a2-45752a3774ee\" (UID: \"bccfd807-1efe-4af5-b0a2-45752a3774ee\") " Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.175686 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccfd807-1efe-4af5-b0a2-45752a3774ee-logs" (OuterVolumeSpecName: "logs") pod "bccfd807-1efe-4af5-b0a2-45752a3774ee" (UID: "bccfd807-1efe-4af5-b0a2-45752a3774ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.182483 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51c114c-7132-45e8-9e6e-ec0c783ede0f-kube-api-access-fgmgv" (OuterVolumeSpecName: "kube-api-access-fgmgv") pod "b51c114c-7132-45e8-9e6e-ec0c783ede0f" (UID: "b51c114c-7132-45e8-9e6e-ec0c783ede0f"). InnerVolumeSpecName "kube-api-access-fgmgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.194183 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b51c114c-7132-45e8-9e6e-ec0c783ede0f" (UID: "b51c114c-7132-45e8-9e6e-ec0c783ede0f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.194220 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-scripts" (OuterVolumeSpecName: "scripts") pod "b51c114c-7132-45e8-9e6e-ec0c783ede0f" (UID: "b51c114c-7132-45e8-9e6e-ec0c783ede0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.197391 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-scripts" (OuterVolumeSpecName: "scripts") pod "bccfd807-1efe-4af5-b0a2-45752a3774ee" (UID: "bccfd807-1efe-4af5-b0a2-45752a3774ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.231091 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b51c114c-7132-45e8-9e6e-ec0c783ede0f" (UID: "b51c114c-7132-45e8-9e6e-ec0c783ede0f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.238174 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccfd807-1efe-4af5-b0a2-45752a3774ee-kube-api-access-pglsc" (OuterVolumeSpecName: "kube-api-access-pglsc") pod "bccfd807-1efe-4af5-b0a2-45752a3774ee" (UID: "bccfd807-1efe-4af5-b0a2-45752a3774ee"). InnerVolumeSpecName "kube-api-access-pglsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.239366 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b51c114c-7132-45e8-9e6e-ec0c783ede0f" (UID: "b51c114c-7132-45e8-9e6e-ec0c783ede0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.239890 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-config-data" (OuterVolumeSpecName: "config-data") pod "bccfd807-1efe-4af5-b0a2-45752a3774ee" (UID: "bccfd807-1efe-4af5-b0a2-45752a3774ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.248898 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-config-data" (OuterVolumeSpecName: "config-data") pod "b51c114c-7132-45e8-9e6e-ec0c783ede0f" (UID: "b51c114c-7132-45e8-9e6e-ec0c783ede0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.262216 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bccfd807-1efe-4af5-b0a2-45752a3774ee" (UID: "bccfd807-1efe-4af5-b0a2-45752a3774ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276532 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276556 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276566 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgmgv\" (UniqueName: \"kubernetes.io/projected/b51c114c-7132-45e8-9e6e-ec0c783ede0f-kube-api-access-fgmgv\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276579 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bccfd807-1efe-4af5-b0a2-45752a3774ee-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276588 5028 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276597 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276605 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276613 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccfd807-1efe-4af5-b0a2-45752a3774ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276621 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276628 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pglsc\" (UniqueName: \"kubernetes.io/projected/bccfd807-1efe-4af5-b0a2-45752a3774ee-kube-api-access-pglsc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.276636 5028 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b51c114c-7132-45e8-9e6e-ec0c783ede0f-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.565800 5028 generic.go:334] "Generic (PLEG): container finished" podID="8373ad19-cd11-4d27-8936-27132ab9bf72" containerID="fb0c136a883e5097dee7d61f52e18b211b1aa04f3990800f6dee2cb5309eb9a9" exitCode=0 Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.565867 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7hptw" event={"ID":"8373ad19-cd11-4d27-8936-27132ab9bf72","Type":"ContainerDied","Data":"fb0c136a883e5097dee7d61f52e18b211b1aa04f3990800f6dee2cb5309eb9a9"} Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.568343 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-skc8g" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.568402 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-skc8g" event={"ID":"bccfd807-1efe-4af5-b0a2-45752a3774ee","Type":"ContainerDied","Data":"040e6ed442b96c59bb5b8b4367c888a71bcfad080b9dd89e89e1c944648b59c3"} Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.568443 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="040e6ed442b96c59bb5b8b4367c888a71bcfad080b9dd89e89e1c944648b59c3" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.597317 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xf95z" event={"ID":"b51c114c-7132-45e8-9e6e-ec0c783ede0f","Type":"ContainerDied","Data":"94fa8c5b719a3c369b87da39931aa383d2e1b4d527586259de63e4329fe56a49"} Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.597578 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94fa8c5b719a3c369b87da39931aa383d2e1b4d527586259de63e4329fe56a49" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.597641 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xf95z" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.655744 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b4c494dd6-rn255"] Nov 23 07:10:28 crc kubenswrapper[5028]: E1123 07:10:28.656158 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51c114c-7132-45e8-9e6e-ec0c783ede0f" containerName="keystone-bootstrap" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.656175 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51c114c-7132-45e8-9e6e-ec0c783ede0f" containerName="keystone-bootstrap" Nov 23 07:10:28 crc kubenswrapper[5028]: E1123 07:10:28.656191 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccfd807-1efe-4af5-b0a2-45752a3774ee" containerName="placement-db-sync" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.656198 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccfd807-1efe-4af5-b0a2-45752a3774ee" containerName="placement-db-sync" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.656482 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccfd807-1efe-4af5-b0a2-45752a3774ee" containerName="placement-db-sync" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.656502 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51c114c-7132-45e8-9e6e-ec0c783ede0f" containerName="keystone-bootstrap" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.657553 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.661416 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.661435 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k48v8" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.661606 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.661700 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.661895 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.674560 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b4c494dd6-rn255"] Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.768426 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79f64857b-ngrdb"] Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.769849 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.777317 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.777603 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.777719 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.777874 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-22nmw" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.778023 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.778157 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.783090 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-combined-ca-bundle\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.783322 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-scripts\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.783342 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-internal-tls-certs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.783479 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a3fd963-7f73-4069-a993-1dfec6751c57-logs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.783611 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-public-tls-certs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.783660 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-config-data\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.783781 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh7x9\" (UniqueName: \"kubernetes.io/projected/2a3fd963-7f73-4069-a993-1dfec6751c57-kube-api-access-dh7x9\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.792161 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79f64857b-ngrdb"] Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885214 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-credential-keys\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885253 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-fernet-keys\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885290 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-scripts\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-internal-tls-certs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885329 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-public-tls-certs\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885352 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a3fd963-7f73-4069-a993-1dfec6751c57-logs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885384 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-public-tls-certs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885404 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-config-data\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " 
pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885431 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-combined-ca-bundle\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885451 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-scripts\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885471 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-internal-tls-certs\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885490 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-config-data\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885546 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh7x9\" (UniqueName: \"kubernetes.io/projected/2a3fd963-7f73-4069-a993-1dfec6751c57-kube-api-access-dh7x9\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885580 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2bxf\" (UniqueName: \"kubernetes.io/projected/f5f67379-72d4-46f4-844f-b00c8f912169-kube-api-access-l2bxf\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.885605 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-combined-ca-bundle\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.888329 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a3fd963-7f73-4069-a993-1dfec6751c57-logs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.893819 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-config-data\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 
crc kubenswrapper[5028]: I1123 07:10:28.893833 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-combined-ca-bundle\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.894009 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-public-tls-certs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.894008 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-scripts\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.895920 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-internal-tls-certs\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.909634 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh7x9\" (UniqueName: \"kubernetes.io/projected/2a3fd963-7f73-4069-a993-1dfec6751c57-kube-api-access-dh7x9\") pod \"placement-6b4c494dd6-rn255\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.984915 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987510 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-config-data\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987553 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-combined-ca-bundle\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987577 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-scripts\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987603 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-internal-tls-certs\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987648 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2bxf\" (UniqueName: \"kubernetes.io/projected/f5f67379-72d4-46f4-844f-b00c8f912169-kube-api-access-l2bxf\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987698 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-credential-keys\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987713 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-fernet-keys\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.987743 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-public-tls-certs\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.991024 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-config-data\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.996728 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-scripts\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.997624 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-public-tls-certs\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.997782 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-credential-keys\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:28 crc kubenswrapper[5028]: I1123 07:10:28.998077 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-fernet-keys\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.003059 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-combined-ca-bundle\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.003672 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-internal-tls-certs\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.003749 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2bxf\" (UniqueName: \"kubernetes.io/projected/f5f67379-72d4-46f4-844f-b00c8f912169-kube-api-access-l2bxf\") pod \"keystone-79f64857b-ngrdb\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") " pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.114452 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.594679 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.606777 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.694982 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.695122 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.835116 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:10:29 crc kubenswrapper[5028]: I1123 07:10:29.936283 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:10:30 crc kubenswrapper[5028]: I1123 07:10:30.946511 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:10:30 crc kubenswrapper[5028]: I1123 07:10:30.946779 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.632316 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7hptw" event={"ID":"8373ad19-cd11-4d27-8936-27132ab9bf72","Type":"ContainerDied","Data":"8341340dc6d1740900da38fab0f27a4384c2bfce03b38e26de074c7e7773c1e2"} Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.632630 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8341340dc6d1740900da38fab0f27a4384c2bfce03b38e26de074c7e7773c1e2" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.633977 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw9rb" event={"ID":"df8685a3-3877-4017-b2f4-69474e17a008","Type":"ContainerDied","Data":"68eb8f91bcf71eee666d650e1f2de73c845a808a2200d9a18bc323a5e39c891f"} Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.634001 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68eb8f91bcf71eee666d650e1f2de73c845a808a2200d9a18bc323a5e39c891f" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.682133 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.690102 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7hptw" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.790320 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-combined-ca-bundle\") pod \"8373ad19-cd11-4d27-8936-27132ab9bf72\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.790452 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs575\" (UniqueName: \"kubernetes.io/projected/df8685a3-3877-4017-b2f4-69474e17a008-kube-api-access-xs575\") pod \"df8685a3-3877-4017-b2f4-69474e17a008\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.790504 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-db-sync-config-data\") pod \"df8685a3-3877-4017-b2f4-69474e17a008\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.790577 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-config\") pod \"8373ad19-cd11-4d27-8936-27132ab9bf72\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.790623 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-combined-ca-bundle\") pod \"df8685a3-3877-4017-b2f4-69474e17a008\" (UID: \"df8685a3-3877-4017-b2f4-69474e17a008\") " Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.790678 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w6hm\" (UniqueName: \"kubernetes.io/projected/8373ad19-cd11-4d27-8936-27132ab9bf72-kube-api-access-8w6hm\") pod \"8373ad19-cd11-4d27-8936-27132ab9bf72\" (UID: \"8373ad19-cd11-4d27-8936-27132ab9bf72\") " Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.796770 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "df8685a3-3877-4017-b2f4-69474e17a008" (UID: "df8685a3-3877-4017-b2f4-69474e17a008"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.797238 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8685a3-3877-4017-b2f4-69474e17a008-kube-api-access-xs575" (OuterVolumeSpecName: "kube-api-access-xs575") pod "df8685a3-3877-4017-b2f4-69474e17a008" (UID: "df8685a3-3877-4017-b2f4-69474e17a008"). InnerVolumeSpecName "kube-api-access-xs575". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.797544 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8373ad19-cd11-4d27-8936-27132ab9bf72-kube-api-access-8w6hm" (OuterVolumeSpecName: "kube-api-access-8w6hm") pod "8373ad19-cd11-4d27-8936-27132ab9bf72" (UID: "8373ad19-cd11-4d27-8936-27132ab9bf72"). InnerVolumeSpecName "kube-api-access-8w6hm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.817354 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-config" (OuterVolumeSpecName: "config") pod "8373ad19-cd11-4d27-8936-27132ab9bf72" (UID: "8373ad19-cd11-4d27-8936-27132ab9bf72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.819454 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8373ad19-cd11-4d27-8936-27132ab9bf72" (UID: "8373ad19-cd11-4d27-8936-27132ab9bf72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.824994 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df8685a3-3877-4017-b2f4-69474e17a008" (UID: "df8685a3-3877-4017-b2f4-69474e17a008"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.895350 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.895425 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs575\" (UniqueName: \"kubernetes.io/projected/df8685a3-3877-4017-b2f4-69474e17a008-kube-api-access-xs575\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.895445 5028 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.895465 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8373ad19-cd11-4d27-8936-27132ab9bf72-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.895478 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8685a3-3877-4017-b2f4-69474e17a008-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:32 crc kubenswrapper[5028]: I1123 07:10:32.895519 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w6hm\" (UniqueName: \"kubernetes.io/projected/8373ad19-cd11-4d27-8936-27132ab9bf72-kube-api-access-8w6hm\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.641758 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7hptw" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.641787 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fw9rb" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.950994 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b95cfcf9c-75qmn"] Nov 23 07:10:33 crc kubenswrapper[5028]: E1123 07:10:33.951700 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8373ad19-cd11-4d27-8936-27132ab9bf72" containerName="neutron-db-sync" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.951716 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8373ad19-cd11-4d27-8936-27132ab9bf72" containerName="neutron-db-sync" Nov 23 07:10:33 crc kubenswrapper[5028]: E1123 07:10:33.951727 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8685a3-3877-4017-b2f4-69474e17a008" containerName="barbican-db-sync" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.951734 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8685a3-3877-4017-b2f4-69474e17a008" containerName="barbican-db-sync" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.951924 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8373ad19-cd11-4d27-8936-27132ab9bf72" containerName="neutron-db-sync" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.951967 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8685a3-3877-4017-b2f4-69474e17a008" containerName="barbican-db-sync" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.952850 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:33 crc kubenswrapper[5028]: I1123 07:10:33.972632 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b95cfcf9c-75qmn"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.015012 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c59f98478-vbp6r"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.016509 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.029644 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.029795 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fnpjd" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.030259 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.030374 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-59c8549d57-5f4m7"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.032154 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039231 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039276 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-logs\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039315 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzszb\" (UniqueName: \"kubernetes.io/projected/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-kube-api-access-bzszb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039347 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn2t4\" (UniqueName: \"kubernetes.io/projected/534edd12-5e24-4d56-9f1a-944e1ed4a65b-kube-api-access-qn2t4\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039374 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-config\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039409 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039432 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039451 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-combined-ca-bundle\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039474 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-svc\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039491 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data-custom\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039517 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039562 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039583 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534edd12-5e24-4d56-9f1a-944e1ed4a65b-logs\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039598 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649db\" (UniqueName: \"kubernetes.io/projected/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-kube-api-access-649db\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039616 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data-custom\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.039633 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-combined-ca-bundle\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.042851 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.094193 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-keystone-listener-5c59f98478-vbp6r"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.108165 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59c8549d57-5f4m7"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.132134 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b4c494dd6-rn255"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.141887 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.141930 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534edd12-5e24-4d56-9f1a-944e1ed4a65b-logs\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.141966 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649db\" (UniqueName: \"kubernetes.io/projected/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-kube-api-access-649db\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.141982 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data-custom\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.141997 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-combined-ca-bundle\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142022 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142042 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-logs\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142079 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzszb\" (UniqueName: \"kubernetes.io/projected/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-kube-api-access-bzszb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: 
\"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142107 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn2t4\" (UniqueName: \"kubernetes.io/projected/534edd12-5e24-4d56-9f1a-944e1ed4a65b-kube-api-access-qn2t4\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142134 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-config\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142168 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142189 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142208 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-combined-ca-bundle\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142231 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-svc\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142247 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data-custom\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.142267 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.143103 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.145326 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534edd12-5e24-4d56-9f1a-944e1ed4a65b-logs\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.146810 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.147271 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-logs\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.149730 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68b9d958bb-2lrmv"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.151609 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.152776 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-svc\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.155551 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-config\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.156101 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.156633 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4bshb" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.157741 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-swift-storage-0\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.159983 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.160221 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.176394 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzszb\" 
(UniqueName: \"kubernetes.io/projected/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-kube-api-access-bzszb\") pod \"dnsmasq-dns-5b95cfcf9c-75qmn\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: E1123 07:10:34.176835 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.179861 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data-custom\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.183418 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.186480 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.188529 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-combined-ca-bundle\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.189108 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn2t4\" (UniqueName: \"kubernetes.io/projected/534edd12-5e24-4d56-9f1a-944e1ed4a65b-kube-api-access-qn2t4\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.192539 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data-custom\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.192990 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-combined-ca-bundle\") pod \"barbican-keystone-listener-5c59f98478-vbp6r\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.205751 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-649db\" (UniqueName: \"kubernetes.io/projected/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-kube-api-access-649db\") pod \"barbican-worker-59c8549d57-5f4m7\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.231139 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b95cfcf9c-75qmn"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.231809 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:34 crc kubenswrapper[5028]: W1123 07:10:34.250178 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5f67379_72d4_46f4_844f_b00c8f912169.slice/crio-d58f844ef33434dcc4d87cdb6f841c511856443e9b8eb6494a0141400fb4de32 WatchSource:0}: Error finding container d58f844ef33434dcc4d87cdb6f841c511856443e9b8eb6494a0141400fb4de32: Status 404 returned error can't find the container with id d58f844ef33434dcc4d87cdb6f841c511856443e9b8eb6494a0141400fb4de32 Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.257890 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68b9d958bb-2lrmv"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.269411 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66b66f7449-4h6h2"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.270880 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.292389 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b66f7449-4h6h2"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.304512 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bcf84bcb8-5gw9t"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.307445 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.313087 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.337170 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bcf84bcb8-5gw9t"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.345236 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-config\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.345309 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-httpd-config\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.345350 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-ovndb-tls-certs\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.345468 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-combined-ca-bundle\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.345519 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnhz\" (UniqueName: \"kubernetes.io/projected/77ffa103-e538-400a-b062-6e7f61425356-kube-api-access-shnhz\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.346203 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79f64857b-ngrdb"] Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.383442 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.396541 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447178 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573fa185-1982-437e-b9d5-43a628ab5ee2-logs\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447413 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-config\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447440 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data-custom\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447488 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-svc\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447519 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-config\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447538 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-swift-storage-0\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447564 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-combined-ca-bundle\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447583 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-httpd-config\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447598 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: 
\"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447629 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hf8\" (UniqueName: \"kubernetes.io/projected/573fa185-1982-437e-b9d5-43a628ab5ee2-kube-api-access-c6hf8\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447651 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-ovndb-tls-certs\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447696 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-nb\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447733 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-combined-ca-bundle\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447754 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzb8d\" (UniqueName: \"kubernetes.io/projected/d07f7705-11fe-487d-9be8-24ba110dac9a-kube-api-access-hzb8d\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447775 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shnhz\" (UniqueName: \"kubernetes.io/projected/77ffa103-e538-400a-b062-6e7f61425356-kube-api-access-shnhz\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.447795 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-sb\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.456466 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-config\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.456470 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-ovndb-tls-certs\") pod \"neutron-68b9d958bb-2lrmv\" (UID: 
\"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.461792 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-httpd-config\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.465534 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-combined-ca-bundle\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.471835 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnhz\" (UniqueName: \"kubernetes.io/projected/77ffa103-e538-400a-b062-6e7f61425356-kube-api-access-shnhz\") pod \"neutron-68b9d958bb-2lrmv\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") " pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.507339 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549135 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzb8d\" (UniqueName: \"kubernetes.io/projected/d07f7705-11fe-487d-9be8-24ba110dac9a-kube-api-access-hzb8d\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549184 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-sb\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549225 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573fa185-1982-437e-b9d5-43a628ab5ee2-logs\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549244 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-config\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549266 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data-custom\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-svc\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549334 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-swift-storage-0\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549357 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-combined-ca-bundle\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549376 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549407 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6hf8\" (UniqueName: \"kubernetes.io/projected/573fa185-1982-437e-b9d5-43a628ab5ee2-kube-api-access-c6hf8\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.549452 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-nb\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.550338 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-nb\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.551789 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-svc\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.552182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-swift-storage-0\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.552557 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-sb\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.552882 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573fa185-1982-437e-b9d5-43a628ab5ee2-logs\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.553555 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-config\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.560262 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-combined-ca-bundle\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.579523 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.582298 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6hf8\" (UniqueName: \"kubernetes.io/projected/573fa185-1982-437e-b9d5-43a628ab5ee2-kube-api-access-c6hf8\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.609769 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzb8d\" (UniqueName: \"kubernetes.io/projected/d07f7705-11fe-487d-9be8-24ba110dac9a-kube-api-access-hzb8d\") pod \"dnsmasq-dns-66b66f7449-4h6h2\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.618075 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data-custom\") pod \"barbican-api-6bcf84bcb8-5gw9t\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.630094 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.689500 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerStarted","Data":"b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142"} Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.689760 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="ceilometer-notification-agent" containerID="cri-o://3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513" gracePeriod=30 Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.689937 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.690054 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="proxy-httpd" containerID="cri-o://b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142" gracePeriod=30 Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.690158 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="sg-core" containerID="cri-o://31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1" gracePeriod=30 Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.723153 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f64857b-ngrdb" event={"ID":"f5f67379-72d4-46f4-844f-b00c8f912169","Type":"ContainerStarted","Data":"0825116d750ee80fd532b8320d8c85f3b4fe208475c0c2c3203bb0ac33a3586a"} Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.723408 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.723419 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f64857b-ngrdb" event={"ID":"f5f67379-72d4-46f4-844f-b00c8f912169","Type":"ContainerStarted","Data":"d58f844ef33434dcc4d87cdb6f841c511856443e9b8eb6494a0141400fb4de32"} Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.763380 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4c494dd6-rn255" event={"ID":"2a3fd963-7f73-4069-a993-1dfec6751c57","Type":"ContainerStarted","Data":"f9e28eb9d85cec0a94161344fe470187e060552cb2ba5add91964b16fd771169"} Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.763430 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4c494dd6-rn255" event={"ID":"2a3fd963-7f73-4069-a993-1dfec6751c57","Type":"ContainerStarted","Data":"e36d88e83f9befa3fd088d496b13719b56f9915c234c134b23b30e8098107046"} Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.839008 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79f64857b-ngrdb" podStartSLOduration=6.838986171 podStartE2EDuration="6.838986171s" podCreationTimestamp="2025-11-23 07:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:34.788854952 +0000 UTC m=+1218.486259741" watchObservedRunningTime="2025-11-23 07:10:34.838986171 +0000 UTC 
m=+1218.536390950" Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.883154 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b95cfcf9c-75qmn"] Nov 23 07:10:34 crc kubenswrapper[5028]: W1123 07:10:34.896304 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbea5c718_2797_4a7d_bb5e_9e56a4cddf1d.slice/crio-9b39f8360f469b19c2afb59757429c745fb1ec866e7373d03bffd34635d61709 WatchSource:0}: Error finding container 9b39f8360f469b19c2afb59757429c745fb1ec866e7373d03bffd34635d61709: Status 404 returned error can't find the container with id 9b39f8360f469b19c2afb59757429c745fb1ec866e7373d03bffd34635d61709 Nov 23 07:10:34 crc kubenswrapper[5028]: I1123 07:10:34.898853 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.772427 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6nfnj" event={"ID":"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17","Type":"ContainerStarted","Data":"489d955f8bae0d4ecbdf8344baa829141870b26ffc1a299b4b9998131b5d5ea4"} Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.773740 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" event={"ID":"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d","Type":"ContainerStarted","Data":"ec2ddca318ffddb04d82d11e63feb4e7101a083513eace6ad8fdcc5a3d94f89d"} Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.773783 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" event={"ID":"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d","Type":"ContainerStarted","Data":"9b39f8360f469b19c2afb59757429c745fb1ec866e7373d03bffd34635d61709"} Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.773893 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" podUID="bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" containerName="init" containerID="cri-o://ec2ddca318ffddb04d82d11e63feb4e7101a083513eace6ad8fdcc5a3d94f89d" gracePeriod=10 Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.791396 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4c494dd6-rn255" event={"ID":"2a3fd963-7f73-4069-a993-1dfec6751c57","Type":"ContainerStarted","Data":"5c6c526d79aa0e5f2a9c03d7440a3625e79fc7e6164cb907a3f55aad201ead50"} Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.792170 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.792929 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6nfnj" podStartSLOduration=4.810578284 podStartE2EDuration="45.792841733s" podCreationTimestamp="2025-11-23 07:09:50 +0000 UTC" firstStartedPulling="2025-11-23 07:09:52.7045646 +0000 UTC m=+1176.401969379" lastFinishedPulling="2025-11-23 07:10:33.686828039 +0000 UTC m=+1217.384232828" observedRunningTime="2025-11-23 07:10:35.786680283 +0000 UTC m=+1219.484085062" watchObservedRunningTime="2025-11-23 07:10:35.792841733 +0000 UTC m=+1219.490246512" Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.794108 5028 generic.go:334] "Generic (PLEG): container finished" podID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerID="b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142" 
exitCode=0 Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.794128 5028 generic.go:334] "Generic (PLEG): container finished" podID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerID="31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1" exitCode=2 Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.794532 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerDied","Data":"b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142"} Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.794556 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerDied","Data":"31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1"} Nov 23 07:10:35 crc kubenswrapper[5028]: I1123 07:10:35.836784 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b4c494dd6-rn255" podStartSLOduration=7.8367631410000005 podStartE2EDuration="7.836763141s" podCreationTimestamp="2025-11-23 07:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:35.8334503 +0000 UTC m=+1219.530855079" watchObservedRunningTime="2025-11-23 07:10:35.836763141 +0000 UTC m=+1219.534167930" Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.061335 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c59f98478-vbp6r"] Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.150940 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-59c8549d57-5f4m7"] Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.211279 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b66f7449-4h6h2"] Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.239158 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bcf84bcb8-5gw9t"] Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.325569 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68b9d958bb-2lrmv"] Nov 23 07:10:36 crc kubenswrapper[5028]: W1123 07:10:36.346496 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ffa103_e538_400a_b062_6e7f61425356.slice/crio-83180198c6d66f92cf2a58da04644f3539e9629cd06186aa7da3839049558df2 WatchSource:0}: Error finding container 83180198c6d66f92cf2a58da04644f3539e9629cd06186aa7da3839049558df2: Status 404 returned error can't find the container with id 83180198c6d66f92cf2a58da04644f3539e9629cd06186aa7da3839049558df2 Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.838868 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" event={"ID":"534edd12-5e24-4d56-9f1a-944e1ed4a65b","Type":"ContainerStarted","Data":"2950ac751398b4cd6049e8fdfa0744256d107e8eaf1f0e75375a2020f577ee89"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.845388 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" event={"ID":"d07f7705-11fe-487d-9be8-24ba110dac9a","Type":"ContainerStarted","Data":"546cfbcd2ed04dac8ebc09498036b695ca889314b9fb5b598fd0f2bea9fb808a"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.845429 5028 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" event={"ID":"d07f7705-11fe-487d-9be8-24ba110dac9a","Type":"ContainerStarted","Data":"b3bb068c2b8171f80650e170e84d6d01d7e481e916a29b2fcaf7db305a593024"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.847929 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68b9d958bb-2lrmv" event={"ID":"77ffa103-e538-400a-b062-6e7f61425356","Type":"ContainerStarted","Data":"d0afb828e48dcf0fbd5d9267062f1cac16ef87b41adc197c4bbc8dea8bdc1980"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.848044 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68b9d958bb-2lrmv" event={"ID":"77ffa103-e538-400a-b062-6e7f61425356","Type":"ContainerStarted","Data":"83180198c6d66f92cf2a58da04644f3539e9629cd06186aa7da3839049558df2"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.864996 5028 generic.go:334] "Generic (PLEG): container finished" podID="bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" containerID="ec2ddca318ffddb04d82d11e63feb4e7101a083513eace6ad8fdcc5a3d94f89d" exitCode=0 Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.865101 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" event={"ID":"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d","Type":"ContainerDied","Data":"ec2ddca318ffddb04d82d11e63feb4e7101a083513eace6ad8fdcc5a3d94f89d"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.866300 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c8549d57-5f4m7" event={"ID":"83c6472a-6c69-45ff-b0c2-3bf66e0523f2","Type":"ContainerStarted","Data":"6281b766e479a340e6b0612d2e058b8598eb659f2c8a3b77efc44256a18ec2a7"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.868164 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" event={"ID":"573fa185-1982-437e-b9d5-43a628ab5ee2","Type":"ContainerStarted","Data":"2615070d3b8815e54e0b7edb40162492d505ba0ce3f8900f378b4e7fd3cf11d2"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.868194 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" event={"ID":"573fa185-1982-437e-b9d5-43a628ab5ee2","Type":"ContainerStarted","Data":"899942f084618b6c97f9ce4a73a6e96bcdd073e191828c11e61ca51974ae2668"} Nov 23 07:10:36 crc kubenswrapper[5028]: I1123 07:10:36.868212 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.114232 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.218165 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzszb\" (UniqueName: \"kubernetes.io/projected/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-kube-api-access-bzszb\") pod \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.219102 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-svc\") pod \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.219240 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-nb\") pod \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.219275 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-config\") pod \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.219353 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-sb\") pod \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.219395 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-swift-storage-0\") pod \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\" (UID: \"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d\") " Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.241397 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-kube-api-access-bzszb" (OuterVolumeSpecName: "kube-api-access-bzszb") pod "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" (UID: "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d"). InnerVolumeSpecName "kube-api-access-bzszb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.263692 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" (UID: "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.266782 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" (UID: "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.285776 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-config" (OuterVolumeSpecName: "config") pod "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" (UID: "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.286374 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" (UID: "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.286417 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" (UID: "bea5c718-2797-4a7d-bb5e-9e56a4cddf1d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.322700 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.322743 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.322755 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.322765 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.322778 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzszb\" (UniqueName: \"kubernetes.io/projected/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-kube-api-access-bzszb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.322792 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.547772 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56d56d656c-8p7fn"] Nov 23 07:10:37 crc kubenswrapper[5028]: E1123 07:10:37.549451 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" containerName="init" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.549581 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" containerName="init" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.549879 5028 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" containerName="init" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.551854 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.559279 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.559655 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.564634 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56d56d656c-8p7fn"] Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.627889 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-public-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.628116 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-ovndb-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.628202 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-internal-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.628331 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-combined-ca-bundle\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.628409 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mgzn\" (UniqueName: \"kubernetes.io/projected/f7d02425-cddb-44e9-983a-456b2dc4d6fe-kube-api-access-7mgzn\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.628505 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-httpd-config\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.628574 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-config\") pod \"neutron-56d56d656c-8p7fn\" (UID: 
\"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.729758 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-config\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.730064 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-public-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.730141 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-ovndb-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.730245 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-internal-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.731641 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-combined-ca-bundle\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.731778 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-httpd-config\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.731859 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mgzn\" (UniqueName: \"kubernetes.io/projected/f7d02425-cddb-44e9-983a-456b2dc4d6fe-kube-api-access-7mgzn\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.734810 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-public-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.734906 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-ovndb-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 
07:10:37.735537 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-internal-tls-certs\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.735625 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-combined-ca-bundle\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.736484 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-config\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.746580 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-httpd-config\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.754646 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mgzn\" (UniqueName: \"kubernetes.io/projected/f7d02425-cddb-44e9-983a-456b2dc4d6fe-kube-api-access-7mgzn\") pod \"neutron-56d56d656c-8p7fn\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.871735 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.876157 5028 generic.go:334] "Generic (PLEG): container finished" podID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerID="546cfbcd2ed04dac8ebc09498036b695ca889314b9fb5b598fd0f2bea9fb808a" exitCode=0 Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.876293 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" event={"ID":"d07f7705-11fe-487d-9be8-24ba110dac9a","Type":"ContainerDied","Data":"546cfbcd2ed04dac8ebc09498036b695ca889314b9fb5b598fd0f2bea9fb808a"} Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.880600 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68b9d958bb-2lrmv" event={"ID":"77ffa103-e538-400a-b062-6e7f61425356","Type":"ContainerStarted","Data":"8d8356abe87ad54a026437824775e8156d07f25ef4850d0444c87cb4d71ed0e2"} Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.882493 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" event={"ID":"bea5c718-2797-4a7d-bb5e-9e56a4cddf1d","Type":"ContainerDied","Data":"9b39f8360f469b19c2afb59757429c745fb1ec866e7373d03bffd34635d61709"} Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.882547 5028 scope.go:117] "RemoveContainer" containerID="ec2ddca318ffddb04d82d11e63feb4e7101a083513eace6ad8fdcc5a3d94f89d" Nov 23 07:10:37 crc kubenswrapper[5028]: I1123 07:10:37.882725 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b95cfcf9c-75qmn" Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.123130 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b95cfcf9c-75qmn"] Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.147941 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b95cfcf9c-75qmn"] Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.419083 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56d56d656c-8p7fn"] Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.893921 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" event={"ID":"d07f7705-11fe-487d-9be8-24ba110dac9a","Type":"ContainerStarted","Data":"cd2681dcfce8f7732a81760ac09f887e3912267a3c7661c4353b9574b37422dd"} Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.894464 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.900227 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" event={"ID":"573fa185-1982-437e-b9d5-43a628ab5ee2","Type":"ContainerStarted","Data":"7e3d6583f81183730daa6f3092393a053d2d9a7c825fd26714e87f123ad7e913"} Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.900275 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68b9d958bb-2lrmv" Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.900292 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.900303 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.925889 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" podStartSLOduration=4.9258691070000005 podStartE2EDuration="4.925869107s" podCreationTimestamp="2025-11-23 07:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:38.918862617 +0000 UTC m=+1222.616267396" watchObservedRunningTime="2025-11-23 07:10:38.925869107 +0000 UTC m=+1222.623273886" Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.946116 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68b9d958bb-2lrmv" podStartSLOduration=4.945003832 podStartE2EDuration="4.945003832s" podCreationTimestamp="2025-11-23 07:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:38.939500038 +0000 UTC m=+1222.636904817" watchObservedRunningTime="2025-11-23 07:10:38.945003832 +0000 UTC m=+1222.642408621" Nov 23 07:10:38 crc kubenswrapper[5028]: I1123 07:10:38.966459 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" podStartSLOduration=4.966439933 podStartE2EDuration="4.966439933s" podCreationTimestamp="2025-11-23 07:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:38.96135542 +0000 UTC m=+1222.658760209" 
watchObservedRunningTime="2025-11-23 07:10:38.966439933 +0000 UTC m=+1222.663844712" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.065000 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea5c718-2797-4a7d-bb5e-9e56a4cddf1d" path="/var/lib/kubelet/pods/bea5c718-2797-4a7d-bb5e-9e56a4cddf1d/volumes" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.836424 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.907663 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d56d656c-8p7fn" event={"ID":"f7d02425-cddb-44e9-983a-456b2dc4d6fe","Type":"ContainerStarted","Data":"940f74017dba594cbc47228face62b55ed6f8064b06190a9c015b8bd33b0e3f6"} Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.907703 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d56d656c-8p7fn" event={"ID":"f7d02425-cddb-44e9-983a-456b2dc4d6fe","Type":"ContainerStarted","Data":"df6bb19c129f226f28d28f8855ea9d5e76923a79a6784d7ce14fe385d5b16e81"} Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.908678 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c8549d57-5f4m7" event={"ID":"83c6472a-6c69-45ff-b0c2-3bf66e0523f2","Type":"ContainerStarted","Data":"8cdb5b500524c4ab468eb818cb3106416ad89266ed3a6334d118c5a750b1d5a5"} Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.911111 5028 generic.go:334] "Generic (PLEG): container finished" podID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerID="3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513" exitCode=0 Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.911160 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.911206 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerDied","Data":"3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513"} Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.911245 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"69d1e57b-89c3-49a4-95dc-537dcedf1c54","Type":"ContainerDied","Data":"8a99585857b02d37c82229af855b67933741c43c64e4649e76f0cece2a372699"} Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.911264 5028 scope.go:117] "RemoveContainer" containerID="b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.912873 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" event={"ID":"534edd12-5e24-4d56-9f1a-944e1ed4a65b","Type":"ContainerStarted","Data":"d694a3f6d26807a42af11ccff6f7020afb5c194adb048d335e4b7a16a625a72f"} Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.937632 5028 scope.go:117] "RemoveContainer" containerID="31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.967253 5028 scope.go:117] "RemoveContainer" containerID="3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.985368 5028 scope.go:117] "RemoveContainer" containerID="b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142" Nov 23 07:10:39 crc kubenswrapper[5028]: E1123 07:10:39.985896 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142\": container with ID starting with b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142 not found: ID does not exist" containerID="b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.985939 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142"} err="failed to get container status \"b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142\": rpc error: code = NotFound desc = could not find container \"b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142\": container with ID starting with b35b3744369eede892be1f905c1d6582c48dae3b69dcac546bf963a2c2467142 not found: ID does not exist" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.985989 5028 scope.go:117] "RemoveContainer" containerID="31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1" Nov 23 07:10:39 crc kubenswrapper[5028]: E1123 07:10:39.986348 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1\": container with ID starting with 31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1 not found: ID does not exist" containerID="31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.986377 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1"} err="failed to get container status \"31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1\": rpc error: code = NotFound desc = could not find container \"31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1\": container with ID starting with 31e13363d48b650d64a689094e23795f2219accf36399c59f2a14a7306b7f7f1 not found: ID does not exist" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.986398 5028 scope.go:117] "RemoveContainer" containerID="3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513" Nov 23 07:10:39 crc kubenswrapper[5028]: E1123 07:10:39.986614 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513\": container with ID starting with 3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513 not found: ID does not exist" containerID="3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513" Nov 23 07:10:39 crc kubenswrapper[5028]: I1123 07:10:39.986645 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513"} err="failed to get container status \"3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513\": rpc error: code = NotFound desc = could not find container \"3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513\": container with ID starting with 3a32b2e039684bee43886b8ce7695f85af52e16310c912981de9b8d6b9c8a513 not found: ID does not exist" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005478 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-run-httpd\") pod \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005639 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-combined-ca-bundle\") pod \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005821 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-sg-core-conf-yaml\") pod \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005850 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfrbf\" (UniqueName: \"kubernetes.io/projected/69d1e57b-89c3-49a4-95dc-537dcedf1c54-kube-api-access-nfrbf\") pod \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005875 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-config-data\") pod \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005872 5028 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69d1e57b-89c3-49a4-95dc-537dcedf1c54" (UID: "69d1e57b-89c3-49a4-95dc-537dcedf1c54"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005910 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-scripts\") pod \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.005963 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-log-httpd\") pod \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\" (UID: \"69d1e57b-89c3-49a4-95dc-537dcedf1c54\") " Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.006307 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.006567 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69d1e57b-89c3-49a4-95dc-537dcedf1c54" (UID: "69d1e57b-89c3-49a4-95dc-537dcedf1c54"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.013560 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d1e57b-89c3-49a4-95dc-537dcedf1c54-kube-api-access-nfrbf" (OuterVolumeSpecName: "kube-api-access-nfrbf") pod "69d1e57b-89c3-49a4-95dc-537dcedf1c54" (UID: "69d1e57b-89c3-49a4-95dc-537dcedf1c54"). InnerVolumeSpecName "kube-api-access-nfrbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.015844 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-scripts" (OuterVolumeSpecName: "scripts") pod "69d1e57b-89c3-49a4-95dc-537dcedf1c54" (UID: "69d1e57b-89c3-49a4-95dc-537dcedf1c54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.039695 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69d1e57b-89c3-49a4-95dc-537dcedf1c54" (UID: "69d1e57b-89c3-49a4-95dc-537dcedf1c54"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.062466 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69d1e57b-89c3-49a4-95dc-537dcedf1c54" (UID: "69d1e57b-89c3-49a4-95dc-537dcedf1c54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.090202 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-config-data" (OuterVolumeSpecName: "config-data") pod "69d1e57b-89c3-49a4-95dc-537dcedf1c54" (UID: "69d1e57b-89c3-49a4-95dc-537dcedf1c54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.108160 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.108189 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69d1e57b-89c3-49a4-95dc-537dcedf1c54-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.108201 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.108212 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.108281 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfrbf\" (UniqueName: \"kubernetes.io/projected/69d1e57b-89c3-49a4-95dc-537dcedf1c54-kube-api-access-nfrbf\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.108738 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d1e57b-89c3-49a4-95dc-537dcedf1c54-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.276228 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.291041 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.305302 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:40 crc kubenswrapper[5028]: E1123 07:10:40.305659 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="ceilometer-notification-agent" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.305675 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="ceilometer-notification-agent" Nov 23 07:10:40 crc kubenswrapper[5028]: E1123 07:10:40.305697 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="sg-core" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.305704 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="sg-core" Nov 23 07:10:40 crc kubenswrapper[5028]: E1123 07:10:40.305718 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="proxy-httpd" Nov 23 07:10:40 crc 
kubenswrapper[5028]: I1123 07:10:40.305724 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="proxy-httpd" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.305887 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="sg-core" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.305900 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="proxy-httpd" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.305913 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" containerName="ceilometer-notification-agent" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.307827 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.310287 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.310527 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.335865 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.393623 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d87c9f496-cstmz"] Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.399743 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.404349 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.404378 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.409079 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d87c9f496-cstmz"] Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.416661 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.416694 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-scripts\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.416710 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-run-httpd\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.416742 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-log-httpd\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.416804 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-config-data\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.416817 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.416857 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvr2\" (UniqueName: \"kubernetes.io/projected/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-kube-api-access-kwvr2\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518492 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-log-httpd\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518557 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-public-tls-certs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518609 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518628 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-internal-tls-certs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518649 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518668 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-config-data\") pod \"ceilometer-0\" (UID: 
\"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518865 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5sh\" (UniqueName: \"kubernetes.io/projected/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-kube-api-access-mz5sh\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518908 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-log-httpd\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.518929 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwvr2\" (UniqueName: \"kubernetes.io/projected/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-kube-api-access-kwvr2\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.519052 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data-custom\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.519114 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-combined-ca-bundle\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.519137 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-logs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.519165 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.519180 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-scripts\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.519195 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-run-httpd\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.519731 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-run-httpd\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.522730 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-config-data\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.522762 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.523164 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.523551 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-scripts\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.538863 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwvr2\" (UniqueName: \"kubernetes.io/projected/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-kube-api-access-kwvr2\") pod \"ceilometer-0\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.620870 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-public-tls-certs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.621438 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.621542 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-internal-tls-certs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.621654 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz5sh\" (UniqueName: \"kubernetes.io/projected/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-kube-api-access-mz5sh\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " 
pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.621753 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data-custom\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.621862 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-combined-ca-bundle\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.621927 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-logs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.622456 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-logs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.625600 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-public-tls-certs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.626312 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-internal-tls-certs\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.626636 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-combined-ca-bundle\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.626999 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.631736 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data-custom\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 
07:10:40.636958 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.652038 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz5sh\" (UniqueName: \"kubernetes.io/projected/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-kube-api-access-mz5sh\") pod \"barbican-api-7d87c9f496-cstmz\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.738620 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.924140 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d56d656c-8p7fn" event={"ID":"f7d02425-cddb-44e9-983a-456b2dc4d6fe","Type":"ContainerStarted","Data":"5634973a7ea3c3061dced8c30254a7c5f72ea712c7865557fc5eee14be148b26"} Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.925564 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.926859 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c8549d57-5f4m7" event={"ID":"83c6472a-6c69-45ff-b0c2-3bf66e0523f2","Type":"ContainerStarted","Data":"ea5fcb780cb3db6a3d6792d1be448395cc967da3e44687458ed85dea64130699"} Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.949186 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" event={"ID":"534edd12-5e24-4d56-9f1a-944e1ed4a65b","Type":"ContainerStarted","Data":"e31414b733266ebd168e3f95a8474d2ad5c2d2b753b7b52893516e95d7e66b97"} Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.973750 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" podStartSLOduration=4.748038191 podStartE2EDuration="7.973729038s" podCreationTimestamp="2025-11-23 07:10:33 +0000 UTC" firstStartedPulling="2025-11-23 07:10:36.080101397 +0000 UTC m=+1219.777506176" lastFinishedPulling="2025-11-23 07:10:39.305792244 +0000 UTC m=+1223.003197023" observedRunningTime="2025-11-23 07:10:40.973610725 +0000 UTC m=+1224.671015514" watchObservedRunningTime="2025-11-23 07:10:40.973729038 +0000 UTC m=+1224.671133817" Nov 23 07:10:40 crc kubenswrapper[5028]: I1123 07:10:40.975589 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-56d56d656c-8p7fn" podStartSLOduration=3.975572893 podStartE2EDuration="3.975572893s" podCreationTimestamp="2025-11-23 07:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:40.952703817 +0000 UTC m=+1224.650108586" watchObservedRunningTime="2025-11-23 07:10:40.975572893 +0000 UTC m=+1224.672977672" Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.007206 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-59c8549d57-5f4m7" podStartSLOduration=4.866764717 podStartE2EDuration="8.007184581s" podCreationTimestamp="2025-11-23 07:10:33 +0000 UTC" firstStartedPulling="2025-11-23 07:10:36.175742752 +0000 UTC m=+1219.873147531" lastFinishedPulling="2025-11-23 07:10:39.316162616 +0000 UTC m=+1223.013567395" observedRunningTime="2025-11-23 
07:10:40.998622223 +0000 UTC m=+1224.696027002" watchObservedRunningTime="2025-11-23 07:10:41.007184581 +0000 UTC m=+1224.704589360" Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.073759 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d1e57b-89c3-49a4-95dc-537dcedf1c54" path="/var/lib/kubelet/pods/69d1e57b-89c3-49a4-95dc-537dcedf1c54/volumes" Nov 23 07:10:41 crc kubenswrapper[5028]: W1123 07:10:41.126892 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0fbcc24_53e3_4d12_a5b7_7425ab4d1128.slice/crio-17c38227678516741eb948ad7664f0c153a562403f247f5a8cc81ffc605c9145 WatchSource:0}: Error finding container 17c38227678516741eb948ad7664f0c153a562403f247f5a8cc81ffc605c9145: Status 404 returned error can't find the container with id 17c38227678516741eb948ad7664f0c153a562403f247f5a8cc81ffc605c9145 Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.128599 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.281839 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d87c9f496-cstmz"] Nov 23 07:10:41 crc kubenswrapper[5028]: W1123 07:10:41.285590 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfee3f8a7_3e0a_4008_a7d7_2aeefc65c71c.slice/crio-9072656942ca2c8a425d1380b91f019df6ebd75464c8d48f3e189d6478bcbed7 WatchSource:0}: Error finding container 9072656942ca2c8a425d1380b91f019df6ebd75464c8d48f3e189d6478bcbed7: Status 404 returned error can't find the container with id 9072656942ca2c8a425d1380b91f019df6ebd75464c8d48f3e189d6478bcbed7 Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.963445 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerStarted","Data":"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12"} Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.963991 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerStarted","Data":"17c38227678516741eb948ad7664f0c153a562403f247f5a8cc81ffc605c9145"} Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.968078 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d87c9f496-cstmz" event={"ID":"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c","Type":"ContainerStarted","Data":"f479f9aaf79c6e583bdbd977ec74f93da1e58b297f19fd2858b002f4f930227c"} Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.968102 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d87c9f496-cstmz" event={"ID":"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c","Type":"ContainerStarted","Data":"c9b884444e010e2f9bac9f3e6dce5c53204fff813817cf07388304fa6d747bab"} Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.968112 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d87c9f496-cstmz" event={"ID":"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c","Type":"ContainerStarted","Data":"9072656942ca2c8a425d1380b91f019df6ebd75464c8d48f3e189d6478bcbed7"} Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.969257 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 
07:10:41.969292 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:41 crc kubenswrapper[5028]: I1123 07:10:41.984413 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d87c9f496-cstmz" podStartSLOduration=1.9843950700000001 podStartE2EDuration="1.98439507s" podCreationTimestamp="2025-11-23 07:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:41.982505364 +0000 UTC m=+1225.679910143" watchObservedRunningTime="2025-11-23 07:10:41.98439507 +0000 UTC m=+1225.681799859" Nov 23 07:10:42 crc kubenswrapper[5028]: I1123 07:10:42.977690 5028 generic.go:334] "Generic (PLEG): container finished" podID="4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" containerID="489d955f8bae0d4ecbdf8344baa829141870b26ffc1a299b4b9998131b5d5ea4" exitCode=0 Nov 23 07:10:42 crc kubenswrapper[5028]: I1123 07:10:42.977759 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6nfnj" event={"ID":"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17","Type":"ContainerDied","Data":"489d955f8bae0d4ecbdf8344baa829141870b26ffc1a299b4b9998131b5d5ea4"} Nov 23 07:10:42 crc kubenswrapper[5028]: I1123 07:10:42.980355 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerStarted","Data":"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6"} Nov 23 07:10:43 crc kubenswrapper[5028]: I1123 07:10:43.995432 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerStarted","Data":"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29"} Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.369848 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.498655 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-etc-machine-id\") pod \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.498744 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj2vz\" (UniqueName: \"kubernetes.io/projected/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-kube-api-access-pj2vz\") pod \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.498813 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-db-sync-config-data\") pod \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.498826 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" (UID: "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.498885 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-config-data\") pod \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.498910 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-scripts\") pod \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.498991 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-combined-ca-bundle\") pod \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\" (UID: \"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17\") " Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.499418 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.505235 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-scripts" (OuterVolumeSpecName: "scripts") pod "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" (UID: "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.507383 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" (UID: "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.514534 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-kube-api-access-pj2vz" (OuterVolumeSpecName: "kube-api-access-pj2vz") pod "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" (UID: "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17"). InnerVolumeSpecName "kube-api-access-pj2vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.542027 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" (UID: "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.564530 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-config-data" (OuterVolumeSpecName: "config-data") pod "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" (UID: "4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.601689 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj2vz\" (UniqueName: \"kubernetes.io/projected/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-kube-api-access-pj2vz\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.601725 5028 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.601736 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.601746 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.601757 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:44 crc kubenswrapper[5028]: I1123 07:10:44.901082 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.081411 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6nfnj" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.097313 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798745f775-68pr2"] Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.097355 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.097366 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6nfnj" event={"ID":"4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17","Type":"ContainerDied","Data":"b8d84bb7a25dab6bc1fabc9d0487549dd1af647d805b94076370baebbdb178c0"} Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.097384 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8d84bb7a25dab6bc1fabc9d0487549dd1af647d805b94076370baebbdb178c0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.097393 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerStarted","Data":"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d"} Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.098241 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-798745f775-68pr2" podUID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerName="dnsmasq-dns" containerID="cri-o://95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913" gracePeriod=10 Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.139839 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.685311768 podStartE2EDuration="5.139816859s" podCreationTimestamp="2025-11-23 07:10:40 +0000 UTC" 
firstStartedPulling="2025-11-23 07:10:41.131457273 +0000 UTC m=+1224.828862052" lastFinishedPulling="2025-11-23 07:10:44.585962364 +0000 UTC m=+1228.283367143" observedRunningTime="2025-11-23 07:10:45.120402247 +0000 UTC m=+1228.817807026" watchObservedRunningTime="2025-11-23 07:10:45.139816859 +0000 UTC m=+1228.837221638" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.310756 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:10:45 crc kubenswrapper[5028]: E1123 07:10:45.311410 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" containerName="cinder-db-sync" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.311423 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" containerName="cinder-db-sync" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.311629 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" containerName="cinder-db-sync" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.312520 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.323064 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ad4c9d0-1c02-424a-900f-9db27226d9bb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.323114 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf26n\" (UniqueName: \"kubernetes.io/projected/9ad4c9d0-1c02-424a-900f-9db27226d9bb-kube-api-access-jf26n\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.323141 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.323172 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.323197 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-scripts\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.323255 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " 
pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.334782 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.335054 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.335167 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-2jnl7" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.335262 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.364052 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.410106 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7965876c4f-zc5ww"] Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.412600 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.425029 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.425100 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ad4c9d0-1c02-424a-900f-9db27226d9bb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.425132 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf26n\" (UniqueName: \"kubernetes.io/projected/9ad4c9d0-1c02-424a-900f-9db27226d9bb-kube-api-access-jf26n\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.425157 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.425186 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.425208 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-scripts\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.426038 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ad4c9d0-1c02-424a-900f-9db27226d9bb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.444299 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7965876c4f-zc5ww"] Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.444702 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.460529 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf26n\" (UniqueName: \"kubernetes.io/projected/9ad4c9d0-1c02-424a-900f-9db27226d9bb-kube-api-access-jf26n\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.462497 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.463520 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.466940 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-scripts\") pod \"cinder-scheduler-0\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.533537 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.533628 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-config\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.533704 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.533778 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-svc\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.533798 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.533822 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rtqp\" (UniqueName: \"kubernetes.io/projected/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-kube-api-access-2rtqp\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.558831 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.560630 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.564745 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.574691 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638046 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btphs\" (UniqueName: \"kubernetes.io/projected/8e04927b-9f5f-4bff-be51-20d208a78a0f-kube-api-access-btphs\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638083 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-scripts\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638113 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638134 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e04927b-9f5f-4bff-be51-20d208a78a0f-logs\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638174 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " 
pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638196 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e04927b-9f5f-4bff-be51-20d208a78a0f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638220 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638293 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-svc\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638323 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638354 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rtqp\" (UniqueName: \"kubernetes.io/projected/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-kube-api-access-2rtqp\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638392 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638419 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.638454 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-config\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.639237 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-config\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.639862 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-swift-storage-0\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.640533 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-svc\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.646421 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-sb\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.646890 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-nb\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.666365 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.707502 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rtqp\" (UniqueName: \"kubernetes.io/projected/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-kube-api-access-2rtqp\") pod \"dnsmasq-dns-7965876c4f-zc5ww\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.739841 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.739884 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e04927b-9f5f-4bff-be51-20d208a78a0f-logs\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.739914 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e04927b-9f5f-4bff-be51-20d208a78a0f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.739931 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.740022 5028 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.740083 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btphs\" (UniqueName: \"kubernetes.io/projected/8e04927b-9f5f-4bff-be51-20d208a78a0f-kube-api-access-btphs\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.740099 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-scripts\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.751209 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e04927b-9f5f-4bff-be51-20d208a78a0f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.756519 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-scripts\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.757100 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e04927b-9f5f-4bff-be51-20d208a78a0f-logs\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.759693 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.761567 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data-custom\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.765480 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.780704 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btphs\" (UniqueName: \"kubernetes.io/projected/8e04927b-9f5f-4bff-be51-20d208a78a0f-kube-api-access-btphs\") pod \"cinder-api-0\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.819810 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.880372 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:45 crc kubenswrapper[5028]: I1123 07:10:45.896393 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.055548 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-swift-storage-0\") pod \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.055634 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-sb\") pod \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.055670 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-nb\") pod \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.055738 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-svc\") pod \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.056447 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qcvb\" (UniqueName: \"kubernetes.io/projected/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-kube-api-access-6qcvb\") pod \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.056650 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-config\") pod \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\" (UID: \"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77\") " Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.063787 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-kube-api-access-6qcvb" (OuterVolumeSpecName: "kube-api-access-6qcvb") pod "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" (UID: "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77"). InnerVolumeSpecName "kube-api-access-6qcvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.146927 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" (UID: "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.147308 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" (UID: "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.156887 5028 generic.go:334] "Generic (PLEG): container finished" podID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerID="95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913" exitCode=0 Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.159392 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.159407 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.159416 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qcvb\" (UniqueName: \"kubernetes.io/projected/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-kube-api-access-6qcvb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.159486 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-798745f775-68pr2" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.160320 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798745f775-68pr2" event={"ID":"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77","Type":"ContainerDied","Data":"95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913"} Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.160346 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-798745f775-68pr2" event={"ID":"bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77","Type":"ContainerDied","Data":"c784bced4df2d5cba8562ce89ab3bfb288528e1e8de0d2cf71729299cae85829"} Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.160362 5028 scope.go:117] "RemoveContainer" containerID="95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.175421 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-config" (OuterVolumeSpecName: "config") pod "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" (UID: "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.242312 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" (UID: "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.243796 5028 scope.go:117] "RemoveContainer" containerID="b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.257569 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" (UID: "bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.270054 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.270084 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.270095 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.349356 5028 scope.go:117] "RemoveContainer" containerID="95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913" Nov 23 07:10:46 crc kubenswrapper[5028]: E1123 07:10:46.350777 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913\": container with ID starting with 95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913 not found: ID does not exist" containerID="95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.350815 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913"} err="failed to get container status \"95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913\": rpc error: code = NotFound desc = could not find container \"95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913\": container with ID starting with 95728c7c839f73b4cc5b044c4eef864176f3bb4a8da576f2f6fae58a5b80b913 not found: ID does not exist" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.350833 5028 scope.go:117] "RemoveContainer" containerID="b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84" Nov 23 07:10:46 crc kubenswrapper[5028]: E1123 07:10:46.351055 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84\": container with ID starting with b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84 not found: ID does not exist" containerID="b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.351070 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84"} err="failed to get container status \"b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84\": rpc error: code = NotFound desc = could not find container \"b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84\": container with ID starting with b0ae0db58b95de6a2e92f245f2b7c5587910c35a7e13f0b01ddf858b9f437a84 not found: ID does not exist" Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.356034 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.522875 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7965876c4f-zc5ww"] Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.531777 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-798745f775-68pr2"] Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.539442 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-798745f775-68pr2"] Nov 23 07:10:46 crc kubenswrapper[5028]: I1123 07:10:46.591141 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.104092 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" path="/var/lib/kubelet/pods/bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77/volumes" Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.167465 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.280925 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ad4c9d0-1c02-424a-900f-9db27226d9bb","Type":"ContainerStarted","Data":"3f88fda9ee7374a6ded559da7c36cc27d9284f5063bd53ce3cb35f61d7bea20f"} Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.317632 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e04927b-9f5f-4bff-be51-20d208a78a0f","Type":"ContainerStarted","Data":"a8c1e8da5f6059e527b5ced74383c598da3df72ea5ad89ead3a129d1cea85b1f"} Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.336256 5028 generic.go:334] "Generic (PLEG): container finished" podID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerID="ac1e24102c8e305045c9774cd73e786db0592b89ca49b65c672608607a6701fa" exitCode=0 Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.336300 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" event={"ID":"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc","Type":"ContainerDied","Data":"ac1e24102c8e305045c9774cd73e786db0592b89ca49b65c672608607a6701fa"} Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.336328 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" event={"ID":"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc","Type":"ContainerStarted","Data":"7a51766c92650d962dec57474ab596fd2795403eec39270bcf555ffb0f98866d"} Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.441233 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:47 crc kubenswrapper[5028]: I1123 07:10:47.442795 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:48 crc kubenswrapper[5028]: I1123 
07:10:48.348413 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" event={"ID":"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc","Type":"ContainerStarted","Data":"9dfd92e623f40c9acc280dbb212734247aa57f06bb3da53e198bc52bfab450bb"} Nov 23 07:10:48 crc kubenswrapper[5028]: I1123 07:10:48.349587 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:48 crc kubenswrapper[5028]: I1123 07:10:48.352822 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ad4c9d0-1c02-424a-900f-9db27226d9bb","Type":"ContainerStarted","Data":"838f759bd47bf9f7518789583273dbebd929bd5875966b81a9e459143473d685"} Nov 23 07:10:48 crc kubenswrapper[5028]: I1123 07:10:48.355079 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e04927b-9f5f-4bff-be51-20d208a78a0f","Type":"ContainerStarted","Data":"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec"} Nov 23 07:10:48 crc kubenswrapper[5028]: I1123 07:10:48.373301 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" podStartSLOduration=3.373285376 podStartE2EDuration="3.373285376s" podCreationTimestamp="2025-11-23 07:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:48.366188434 +0000 UTC m=+1232.063593213" watchObservedRunningTime="2025-11-23 07:10:48.373285376 +0000 UTC m=+1232.070690145" Nov 23 07:10:49 crc kubenswrapper[5028]: I1123 07:10:49.363927 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ad4c9d0-1c02-424a-900f-9db27226d9bb","Type":"ContainerStarted","Data":"48d4e75c14dedc9c49dfc74f70328222f45b05342aa264bb83c91e776ffe2617"} Nov 23 07:10:49 crc kubenswrapper[5028]: I1123 07:10:49.366165 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e04927b-9f5f-4bff-be51-20d208a78a0f","Type":"ContainerStarted","Data":"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142"} Nov 23 07:10:49 crc kubenswrapper[5028]: I1123 07:10:49.366415 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api-log" containerID="cri-o://874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec" gracePeriod=30 Nov 23 07:10:49 crc kubenswrapper[5028]: I1123 07:10:49.366447 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api" containerID="cri-o://79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142" gracePeriod=30 Nov 23 07:10:49 crc kubenswrapper[5028]: I1123 07:10:49.379846 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.702416878 podStartE2EDuration="4.379828688s" podCreationTimestamp="2025-11-23 07:10:45 +0000 UTC" firstStartedPulling="2025-11-23 07:10:46.367676913 +0000 UTC m=+1230.065081692" lastFinishedPulling="2025-11-23 07:10:47.045088723 +0000 UTC m=+1230.742493502" observedRunningTime="2025-11-23 07:10:49.379822518 +0000 UTC m=+1233.077227297" watchObservedRunningTime="2025-11-23 07:10:49.379828688 +0000 UTC m=+1233.077233467" Nov 23 07:10:49 crc 
kubenswrapper[5028]: I1123 07:10:49.399357 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.399332552 podStartE2EDuration="4.399332552s" podCreationTimestamp="2025-11-23 07:10:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:49.396999736 +0000 UTC m=+1233.094404515" watchObservedRunningTime="2025-11-23 07:10:49.399332552 +0000 UTC m=+1233.096737331" Nov 23 07:10:49 crc kubenswrapper[5028]: I1123 07:10:49.978620 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.056528 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btphs\" (UniqueName: \"kubernetes.io/projected/8e04927b-9f5f-4bff-be51-20d208a78a0f-kube-api-access-btphs\") pod \"8e04927b-9f5f-4bff-be51-20d208a78a0f\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.056637 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e04927b-9f5f-4bff-be51-20d208a78a0f-logs\") pod \"8e04927b-9f5f-4bff-be51-20d208a78a0f\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.056685 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-combined-ca-bundle\") pod \"8e04927b-9f5f-4bff-be51-20d208a78a0f\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.056726 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e04927b-9f5f-4bff-be51-20d208a78a0f-etc-machine-id\") pod \"8e04927b-9f5f-4bff-be51-20d208a78a0f\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.056746 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data\") pod \"8e04927b-9f5f-4bff-be51-20d208a78a0f\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.056772 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data-custom\") pod \"8e04927b-9f5f-4bff-be51-20d208a78a0f\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.056788 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-scripts\") pod \"8e04927b-9f5f-4bff-be51-20d208a78a0f\" (UID: \"8e04927b-9f5f-4bff-be51-20d208a78a0f\") " Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.057062 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e04927b-9f5f-4bff-be51-20d208a78a0f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8e04927b-9f5f-4bff-be51-20d208a78a0f" (UID: "8e04927b-9f5f-4bff-be51-20d208a78a0f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.057484 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8e04927b-9f5f-4bff-be51-20d208a78a0f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.057795 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e04927b-9f5f-4bff-be51-20d208a78a0f-logs" (OuterVolumeSpecName: "logs") pod "8e04927b-9f5f-4bff-be51-20d208a78a0f" (UID: "8e04927b-9f5f-4bff-be51-20d208a78a0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.063333 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e04927b-9f5f-4bff-be51-20d208a78a0f" (UID: "8e04927b-9f5f-4bff-be51-20d208a78a0f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.063866 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e04927b-9f5f-4bff-be51-20d208a78a0f-kube-api-access-btphs" (OuterVolumeSpecName: "kube-api-access-btphs") pod "8e04927b-9f5f-4bff-be51-20d208a78a0f" (UID: "8e04927b-9f5f-4bff-be51-20d208a78a0f"). InnerVolumeSpecName "kube-api-access-btphs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.065099 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-scripts" (OuterVolumeSpecName: "scripts") pod "8e04927b-9f5f-4bff-be51-20d208a78a0f" (UID: "8e04927b-9f5f-4bff-be51-20d208a78a0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.087317 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e04927b-9f5f-4bff-be51-20d208a78a0f" (UID: "8e04927b-9f5f-4bff-be51-20d208a78a0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.135147 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data" (OuterVolumeSpecName: "config-data") pod "8e04927b-9f5f-4bff-be51-20d208a78a0f" (UID: "8e04927b-9f5f-4bff-be51-20d208a78a0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.158903 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e04927b-9f5f-4bff-be51-20d208a78a0f-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.158963 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.158981 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.158993 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.159004 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e04927b-9f5f-4bff-be51-20d208a78a0f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.159017 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btphs\" (UniqueName: \"kubernetes.io/projected/8e04927b-9f5f-4bff-be51-20d208a78a0f-kube-api-access-btphs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.394406 5028 generic.go:334] "Generic (PLEG): container finished" podID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerID="79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142" exitCode=0 Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.394443 5028 generic.go:334] "Generic (PLEG): container finished" podID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerID="874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec" exitCode=143 Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.395324 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.397323 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e04927b-9f5f-4bff-be51-20d208a78a0f","Type":"ContainerDied","Data":"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142"} Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.397407 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e04927b-9f5f-4bff-be51-20d208a78a0f","Type":"ContainerDied","Data":"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec"} Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.397424 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8e04927b-9f5f-4bff-be51-20d208a78a0f","Type":"ContainerDied","Data":"a8c1e8da5f6059e527b5ced74383c598da3df72ea5ad89ead3a129d1cea85b1f"} Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.397452 5028 scope.go:117] "RemoveContainer" containerID="79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.433935 5028 scope.go:117] "RemoveContainer" containerID="874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.442036 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.450621 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.510467 5028 scope.go:117] "RemoveContainer" containerID="79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142" Nov 23 07:10:50 crc kubenswrapper[5028]: E1123 07:10:50.514264 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142\": container with ID starting with 79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142 not found: ID does not exist" containerID="79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.514530 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142"} err="failed to get container status \"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142\": rpc error: code = NotFound desc = could not find container \"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142\": container with ID starting with 79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142 not found: ID does not exist" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.514677 5028 scope.go:117] "RemoveContainer" containerID="874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec" Nov 23 07:10:50 crc kubenswrapper[5028]: E1123 07:10:50.517383 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec\": container with ID starting with 874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec not found: ID does not exist" containerID="874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.517439 5028 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec"} err="failed to get container status \"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec\": rpc error: code = NotFound desc = could not find container \"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec\": container with ID starting with 874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec not found: ID does not exist" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.517486 5028 scope.go:117] "RemoveContainer" containerID="79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.517828 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142"} err="failed to get container status \"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142\": rpc error: code = NotFound desc = could not find container \"79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142\": container with ID starting with 79ce7adeee8a5b25ad2698593c5620eebad4682d08ddbbd91c9631b30c3e5142 not found: ID does not exist" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.517870 5028 scope.go:117] "RemoveContainer" containerID="874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.518114 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec"} err="failed to get container status \"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec\": rpc error: code = NotFound desc = could not find container \"874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec\": container with ID starting with 874ab970896af3e2059f74537e83e191447fee4473ad9983eaa45b3c879893ec not found: ID does not exist" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.548548 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:50 crc kubenswrapper[5028]: E1123 07:10:50.548983 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.549005 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api" Nov 23 07:10:50 crc kubenswrapper[5028]: E1123 07:10:50.549030 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api-log" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.549036 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api-log" Nov 23 07:10:50 crc kubenswrapper[5028]: E1123 07:10:50.549051 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerName="dnsmasq-dns" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.549057 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerName="dnsmasq-dns" Nov 23 07:10:50 crc kubenswrapper[5028]: E1123 07:10:50.549071 5028 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerName="init" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.549076 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerName="init" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.549241 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdcb8ce0-1ddd-463b-b0b8-064b4e30cc77" containerName="dnsmasq-dns" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.549255 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.549268 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" containerName="cinder-api-log" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.550372 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.554916 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.555172 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.555337 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.566210 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.667642 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.678781 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/023257e8-ab54-4423-94bc-1f8d547afa69-logs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.678844 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-public-tls-certs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.678906 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.678939 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggmb\" (UniqueName: \"kubernetes.io/projected/023257e8-ab54-4423-94bc-1f8d547afa69-kube-api-access-cggmb\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.679024 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data-custom\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.679248 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-scripts\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.679317 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.679441 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.679494 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/023257e8-ab54-4423-94bc-1f8d547afa69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.780793 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.780863 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cggmb\" (UniqueName: \"kubernetes.io/projected/023257e8-ab54-4423-94bc-1f8d547afa69-kube-api-access-cggmb\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.780904 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data-custom\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.780994 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-scripts\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.781026 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.781083 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.781115 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/023257e8-ab54-4423-94bc-1f8d547afa69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.781139 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/023257e8-ab54-4423-94bc-1f8d547afa69-logs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.781174 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-public-tls-certs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.781401 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/023257e8-ab54-4423-94bc-1f8d547afa69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.781971 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/023257e8-ab54-4423-94bc-1f8d547afa69-logs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.786830 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.789579 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.793513 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-scripts\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.793998 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data-custom\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.814065 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.815644 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-public-tls-certs\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.828600 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cggmb\" (UniqueName: \"kubernetes.io/projected/023257e8-ab54-4423-94bc-1f8d547afa69-kube-api-access-cggmb\") pod \"cinder-api-0\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " pod="openstack/cinder-api-0" Nov 23 07:10:50 crc kubenswrapper[5028]: I1123 07:10:50.895418 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:10:51 crc kubenswrapper[5028]: I1123 07:10:51.065924 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e04927b-9f5f-4bff-be51-20d208a78a0f" path="/var/lib/kubelet/pods/8e04927b-9f5f-4bff-be51-20d208a78a0f/volumes" Nov 23 07:10:51 crc kubenswrapper[5028]: I1123 07:10:51.344903 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:10:51 crc kubenswrapper[5028]: W1123 07:10:51.353365 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod023257e8_ab54_4423_94bc_1f8d547afa69.slice/crio-2fe4470ed9c83e6c510c7c699fbcb65852fe334f40ba5099abb01026c18389a7 WatchSource:0}: Error finding container 2fe4470ed9c83e6c510c7c699fbcb65852fe334f40ba5099abb01026c18389a7: Status 404 returned error can't find the container with id 2fe4470ed9c83e6c510c7c699fbcb65852fe334f40ba5099abb01026c18389a7 Nov 23 07:10:51 crc kubenswrapper[5028]: I1123 07:10:51.425069 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"023257e8-ab54-4423-94bc-1f8d547afa69","Type":"ContainerStarted","Data":"2fe4470ed9c83e6c510c7c699fbcb65852fe334f40ba5099abb01026c18389a7"} Nov 23 07:10:52 crc kubenswrapper[5028]: I1123 07:10:52.398668 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:52 crc kubenswrapper[5028]: I1123 07:10:52.447408 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"023257e8-ab54-4423-94bc-1f8d547afa69","Type":"ContainerStarted","Data":"60a864f23434fe7bf4df4b751d820014f9fe0d63d486f0c74863fbcfa326e877"} Nov 23 07:10:52 crc kubenswrapper[5028]: I1123 07:10:52.641192 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:10:52 crc kubenswrapper[5028]: I1123 07:10:52.746083 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bcf84bcb8-5gw9t"] Nov 23 07:10:52 crc kubenswrapper[5028]: I1123 07:10:52.746393 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api-log" containerID="cri-o://2615070d3b8815e54e0b7edb40162492d505ba0ce3f8900f378b4e7fd3cf11d2" gracePeriod=30 Nov 23 07:10:52 crc 
kubenswrapper[5028]: I1123 07:10:52.746887 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api" containerID="cri-o://7e3d6583f81183730daa6f3092393a053d2d9a7c825fd26714e87f123ad7e913" gracePeriod=30 Nov 23 07:10:53 crc kubenswrapper[5028]: I1123 07:10:53.459886 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"023257e8-ab54-4423-94bc-1f8d547afa69","Type":"ContainerStarted","Data":"0eafcd1a07324ac9778cdfa4b78db65ef912e1e1d8dddb571f38dfd760d9566d"} Nov 23 07:10:53 crc kubenswrapper[5028]: I1123 07:10:53.461400 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 07:10:53 crc kubenswrapper[5028]: I1123 07:10:53.464039 5028 generic.go:334] "Generic (PLEG): container finished" podID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerID="2615070d3b8815e54e0b7edb40162492d505ba0ce3f8900f378b4e7fd3cf11d2" exitCode=143 Nov 23 07:10:53 crc kubenswrapper[5028]: I1123 07:10:53.464074 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" event={"ID":"573fa185-1982-437e-b9d5-43a628ab5ee2","Type":"ContainerDied","Data":"2615070d3b8815e54e0b7edb40162492d505ba0ce3f8900f378b4e7fd3cf11d2"} Nov 23 07:10:53 crc kubenswrapper[5028]: I1123 07:10:53.492508 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.492485971 podStartE2EDuration="3.492485971s" podCreationTimestamp="2025-11-23 07:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:10:53.479129356 +0000 UTC m=+1237.176534125" watchObservedRunningTime="2025-11-23 07:10:53.492485971 +0000 UTC m=+1237.189890750" Nov 23 07:10:55 crc kubenswrapper[5028]: I1123 07:10:55.824185 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:10:55 crc kubenswrapper[5028]: I1123 07:10:55.899862 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 23 07:10:55 crc kubenswrapper[5028]: I1123 07:10:55.912808 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b66f7449-4h6h2"] Nov 23 07:10:55 crc kubenswrapper[5028]: I1123 07:10:55.913138 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="dnsmasq-dns" containerID="cri-o://cd2681dcfce8f7732a81760ac09f887e3912267a3c7661c4353b9574b37422dd" gracePeriod=10 Nov 23 07:10:55 crc kubenswrapper[5028]: I1123 07:10:55.951148 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:10:55 crc kubenswrapper[5028]: I1123 07:10:55.972227 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:34920->10.217.0.157:9311: read: connection reset by peer" Nov 23 07:10:55 crc kubenswrapper[5028]: I1123 07:10:55.972535 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" 
podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:34932->10.217.0.157:9311: read: connection reset by peer" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.497592 5028 generic.go:334] "Generic (PLEG): container finished" podID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerID="7e3d6583f81183730daa6f3092393a053d2d9a7c825fd26714e87f123ad7e913" exitCode=0 Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.497921 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" event={"ID":"573fa185-1982-437e-b9d5-43a628ab5ee2","Type":"ContainerDied","Data":"7e3d6583f81183730daa6f3092393a053d2d9a7c825fd26714e87f123ad7e913"} Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.498058 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" event={"ID":"573fa185-1982-437e-b9d5-43a628ab5ee2","Type":"ContainerDied","Data":"899942f084618b6c97f9ce4a73a6e96bcdd073e191828c11e61ca51974ae2668"} Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.498074 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="899942f084618b6c97f9ce4a73a6e96bcdd073e191828c11e61ca51974ae2668" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.501247 5028 generic.go:334] "Generic (PLEG): container finished" podID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerID="cd2681dcfce8f7732a81760ac09f887e3912267a3c7661c4353b9574b37422dd" exitCode=0 Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.501424 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="cinder-scheduler" containerID="cri-o://838f759bd47bf9f7518789583273dbebd929bd5875966b81a9e459143473d685" gracePeriod=30 Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.501490 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" event={"ID":"d07f7705-11fe-487d-9be8-24ba110dac9a","Type":"ContainerDied","Data":"cd2681dcfce8f7732a81760ac09f887e3912267a3c7661c4353b9574b37422dd"} Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.501511 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" event={"ID":"d07f7705-11fe-487d-9be8-24ba110dac9a","Type":"ContainerDied","Data":"b3bb068c2b8171f80650e170e84d6d01d7e481e916a29b2fcaf7db305a593024"} Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.501519 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3bb068c2b8171f80650e170e84d6d01d7e481e916a29b2fcaf7db305a593024" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.502150 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="probe" containerID="cri-o://48d4e75c14dedc9c49dfc74f70328222f45b05342aa264bb83c91e776ffe2617" gracePeriod=30 Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.554689 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.556024 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689647 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6hf8\" (UniqueName: \"kubernetes.io/projected/573fa185-1982-437e-b9d5-43a628ab5ee2-kube-api-access-c6hf8\") pod \"573fa185-1982-437e-b9d5-43a628ab5ee2\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689697 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-combined-ca-bundle\") pod \"573fa185-1982-437e-b9d5-43a628ab5ee2\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689760 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-swift-storage-0\") pod \"d07f7705-11fe-487d-9be8-24ba110dac9a\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689791 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-config\") pod \"d07f7705-11fe-487d-9be8-24ba110dac9a\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689816 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-nb\") pod \"d07f7705-11fe-487d-9be8-24ba110dac9a\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689845 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-sb\") pod \"d07f7705-11fe-487d-9be8-24ba110dac9a\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689871 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data-custom\") pod \"573fa185-1982-437e-b9d5-43a628ab5ee2\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.689900 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-svc\") pod \"d07f7705-11fe-487d-9be8-24ba110dac9a\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.690058 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzb8d\" (UniqueName: \"kubernetes.io/projected/d07f7705-11fe-487d-9be8-24ba110dac9a-kube-api-access-hzb8d\") pod \"d07f7705-11fe-487d-9be8-24ba110dac9a\" (UID: \"d07f7705-11fe-487d-9be8-24ba110dac9a\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.690163 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573fa185-1982-437e-b9d5-43a628ab5ee2-logs\") pod 
\"573fa185-1982-437e-b9d5-43a628ab5ee2\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.690200 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data\") pod \"573fa185-1982-437e-b9d5-43a628ab5ee2\" (UID: \"573fa185-1982-437e-b9d5-43a628ab5ee2\") " Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.695990 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "573fa185-1982-437e-b9d5-43a628ab5ee2" (UID: "573fa185-1982-437e-b9d5-43a628ab5ee2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.698418 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07f7705-11fe-487d-9be8-24ba110dac9a-kube-api-access-hzb8d" (OuterVolumeSpecName: "kube-api-access-hzb8d") pod "d07f7705-11fe-487d-9be8-24ba110dac9a" (UID: "d07f7705-11fe-487d-9be8-24ba110dac9a"). InnerVolumeSpecName "kube-api-access-hzb8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.708751 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573fa185-1982-437e-b9d5-43a628ab5ee2-logs" (OuterVolumeSpecName: "logs") pod "573fa185-1982-437e-b9d5-43a628ab5ee2" (UID: "573fa185-1982-437e-b9d5-43a628ab5ee2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.720640 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573fa185-1982-437e-b9d5-43a628ab5ee2-kube-api-access-c6hf8" (OuterVolumeSpecName: "kube-api-access-c6hf8") pod "573fa185-1982-437e-b9d5-43a628ab5ee2" (UID: "573fa185-1982-437e-b9d5-43a628ab5ee2"). InnerVolumeSpecName "kube-api-access-c6hf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.731642 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "573fa185-1982-437e-b9d5-43a628ab5ee2" (UID: "573fa185-1982-437e-b9d5-43a628ab5ee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.751574 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d07f7705-11fe-487d-9be8-24ba110dac9a" (UID: "d07f7705-11fe-487d-9be8-24ba110dac9a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.754862 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-config" (OuterVolumeSpecName: "config") pod "d07f7705-11fe-487d-9be8-24ba110dac9a" (UID: "d07f7705-11fe-487d-9be8-24ba110dac9a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.756420 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d07f7705-11fe-487d-9be8-24ba110dac9a" (UID: "d07f7705-11fe-487d-9be8-24ba110dac9a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.763620 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data" (OuterVolumeSpecName: "config-data") pod "573fa185-1982-437e-b9d5-43a628ab5ee2" (UID: "573fa185-1982-437e-b9d5-43a628ab5ee2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.783151 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d07f7705-11fe-487d-9be8-24ba110dac9a" (UID: "d07f7705-11fe-487d-9be8-24ba110dac9a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.789126 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d07f7705-11fe-487d-9be8-24ba110dac9a" (UID: "d07f7705-11fe-487d-9be8-24ba110dac9a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792856 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573fa185-1982-437e-b9d5-43a628ab5ee2-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792896 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792912 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6hf8\" (UniqueName: \"kubernetes.io/projected/573fa185-1982-437e-b9d5-43a628ab5ee2-kube-api-access-c6hf8\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792926 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792962 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792974 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792985 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.792995 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.793004 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/573fa185-1982-437e-b9d5-43a628ab5ee2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.793014 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07f7705-11fe-487d-9be8-24ba110dac9a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:56 crc kubenswrapper[5028]: I1123 07:10:56.793023 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzb8d\" (UniqueName: \"kubernetes.io/projected/d07f7705-11fe-487d-9be8-24ba110dac9a-kube-api-access-hzb8d\") on node \"crc\" DevicePath \"\"" Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.510799 5028 generic.go:334] "Generic (PLEG): container finished" podID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerID="48d4e75c14dedc9c49dfc74f70328222f45b05342aa264bb83c91e776ffe2617" exitCode=0 Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.510922 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b66f7449-4h6h2" Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.511053 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ad4c9d0-1c02-424a-900f-9db27226d9bb","Type":"ContainerDied","Data":"48d4e75c14dedc9c49dfc74f70328222f45b05342aa264bb83c91e776ffe2617"} Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.511182 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bcf84bcb8-5gw9t" Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.542826 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bcf84bcb8-5gw9t"] Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.554801 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6bcf84bcb8-5gw9t"] Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.562057 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b66f7449-4h6h2"] Nov 23 07:10:57 crc kubenswrapper[5028]: I1123 07:10:57.568353 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66b66f7449-4h6h2"] Nov 23 07:10:59 crc kubenswrapper[5028]: I1123 07:10:59.062315 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" path="/var/lib/kubelet/pods/573fa185-1982-437e-b9d5-43a628ab5ee2/volumes" Nov 23 07:10:59 crc kubenswrapper[5028]: I1123 07:10:59.063277 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" path="/var/lib/kubelet/pods/d07f7705-11fe-487d-9be8-24ba110dac9a/volumes" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.329989 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.335087 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.561515 5028 generic.go:334] "Generic (PLEG): container finished" podID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerID="838f759bd47bf9f7518789583273dbebd929bd5875966b81a9e459143473d685" exitCode=0 Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.562421 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ad4c9d0-1c02-424a-900f-9db27226d9bb","Type":"ContainerDied","Data":"838f759bd47bf9f7518789583273dbebd929bd5875966b81a9e459143473d685"} Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.562550 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9ad4c9d0-1c02-424a-900f-9db27226d9bb","Type":"ContainerDied","Data":"3f88fda9ee7374a6ded559da7c36cc27d9284f5063bd53ce3cb35f61d7bea20f"} Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.562624 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f88fda9ee7374a6ded559da7c36cc27d9284f5063bd53ce3cb35f61d7bea20f" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.615258 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.764212 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data\") pod \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.764333 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ad4c9d0-1c02-424a-900f-9db27226d9bb-etc-machine-id\") pod \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.764377 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf26n\" (UniqueName: \"kubernetes.io/projected/9ad4c9d0-1c02-424a-900f-9db27226d9bb-kube-api-access-jf26n\") pod \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.764399 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-scripts\") pod \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.764430 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-combined-ca-bundle\") pod \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.764490 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data-custom\") pod \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\" (UID: \"9ad4c9d0-1c02-424a-900f-9db27226d9bb\") " Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.767177 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ad4c9d0-1c02-424a-900f-9db27226d9bb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9ad4c9d0-1c02-424a-900f-9db27226d9bb" (UID: "9ad4c9d0-1c02-424a-900f-9db27226d9bb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.770709 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad4c9d0-1c02-424a-900f-9db27226d9bb-kube-api-access-jf26n" (OuterVolumeSpecName: "kube-api-access-jf26n") pod "9ad4c9d0-1c02-424a-900f-9db27226d9bb" (UID: "9ad4c9d0-1c02-424a-900f-9db27226d9bb"). InnerVolumeSpecName "kube-api-access-jf26n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.771089 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-scripts" (OuterVolumeSpecName: "scripts") pod "9ad4c9d0-1c02-424a-900f-9db27226d9bb" (UID: "9ad4c9d0-1c02-424a-900f-9db27226d9bb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.772051 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ad4c9d0-1c02-424a-900f-9db27226d9bb" (UID: "9ad4c9d0-1c02-424a-900f-9db27226d9bb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.815801 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ad4c9d0-1c02-424a-900f-9db27226d9bb" (UID: "9ad4c9d0-1c02-424a-900f-9db27226d9bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.866112 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.866148 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ad4c9d0-1c02-424a-900f-9db27226d9bb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.866157 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf26n\" (UniqueName: \"kubernetes.io/projected/9ad4c9d0-1c02-424a-900f-9db27226d9bb-kube-api-access-jf26n\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.866169 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.866179 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.884613 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data" (OuterVolumeSpecName: "config-data") pod "9ad4c9d0-1c02-424a-900f-9db27226d9bb" (UID: "9ad4c9d0-1c02-424a-900f-9db27226d9bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.946192 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.946250 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:11:00 crc kubenswrapper[5028]: I1123 07:11:00.968329 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad4c9d0-1c02-424a-900f-9db27226d9bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.017711 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.268916 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269358 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="init" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269381 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="init" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269395 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="cinder-scheduler" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269407 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="cinder-scheduler" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269426 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269437 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269459 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="dnsmasq-dns" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269469 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="dnsmasq-dns" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269483 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="probe" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269493 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="probe" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269516 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api-log" Nov 23 07:11:01 crc 
Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.017711 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79f64857b-ngrdb" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.268916 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269358 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="init" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269381 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="init" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269395 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="cinder-scheduler" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269407 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="cinder-scheduler" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269426 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269437 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269459 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="dnsmasq-dns" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269469 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="dnsmasq-dns" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269483 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="probe" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269493 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="probe" Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.269516 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api-log" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269529 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api-log" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269822 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07f7705-11fe-487d-9be8-24ba110dac9a" containerName="dnsmasq-dns" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.269850 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api-log" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.270727 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="cinder-scheduler" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.270753 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="573fa185-1982-437e-b9d5-43a628ab5ee2" containerName="barbican-api" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.270770 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" containerName="probe" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.271650 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.273898 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dgmjx" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.274018 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.274308 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.282754 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.374033 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbvzh\" (UniqueName: \"kubernetes.io/projected/e2100d9d-d4e3-40aa-8082-e6536e2ed096-kube-api-access-kbvzh\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.374091 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.374344 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config-secret\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.374500 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") "
pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.475788 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbvzh\" (UniqueName: \"kubernetes.io/projected/e2100d9d-d4e3-40aa-8082-e6536e2ed096-kube-api-access-kbvzh\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.476123 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.476195 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config-secret\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.476242 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.476896 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.479574 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.481383 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config-secret\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.493610 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbvzh\" (UniqueName: \"kubernetes.io/projected/e2100d9d-d4e3-40aa-8082-e6536e2ed096-kube-api-access-kbvzh\") pod \"openstackclient\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") " pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.572299 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.594545 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.602385 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.604552 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.617703 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.623428 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: W1123 07:11:01.625845 5028 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: secrets "cinder-scheduler-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Nov 23 07:11:01 crc kubenswrapper[5028]: E1123 07:11:01.625890 5028 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-scheduler-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.648570 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.780604 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.780679 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.780811 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-scripts\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.780935 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wn4\" (UniqueName: \"kubernetes.io/projected/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-kube-api-access-s2wn4\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.781019 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.781104 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.882426 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.882486 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.882532 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.882557 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-scripts\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.882590 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2wn4\" (UniqueName: \"kubernetes.io/projected/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-kube-api-access-s2wn4\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.882618 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.882725 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.892276 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.893227 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-scripts\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " 
pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.894547 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:01 crc kubenswrapper[5028]: I1123 07:11:01.910484 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2wn4\" (UniqueName: \"kubernetes.io/projected/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-kube-api-access-s2wn4\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:02 crc kubenswrapper[5028]: I1123 07:11:02.283422 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 07:11:02 crc kubenswrapper[5028]: I1123 07:11:02.580811 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e2100d9d-d4e3-40aa-8082-e6536e2ed096","Type":"ContainerStarted","Data":"fd025b849847f7ab1c29e2cbb5f099701f859b0e0f86c5ef45cef091bad0325c"} Nov 23 07:11:02 crc kubenswrapper[5028]: E1123 07:11:02.897153 5028 secret.go:188] Couldn't get secret openstack/cinder-scheduler-config-data: failed to sync secret cache: timed out waiting for the condition Nov 23 07:11:02 crc kubenswrapper[5028]: E1123 07:11:02.897247 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom podName:794e1c4d-3639-4b06-9a8b-5597fe8fa4c4 nodeName:}" failed. No retries permitted until 2025-11-23 07:11:03.397225493 +0000 UTC m=+1247.094630272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom") pod "cinder-scheduler-0" (UID: "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4") : failed to sync secret cache: timed out waiting for the condition Nov 23 07:11:02 crc kubenswrapper[5028]: I1123 07:11:02.953805 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 07:11:03 crc kubenswrapper[5028]: I1123 07:11:03.063196 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad4c9d0-1c02-424a-900f-9db27226d9bb" path="/var/lib/kubelet/pods/9ad4c9d0-1c02-424a-900f-9db27226d9bb/volumes" Nov 23 07:11:03 crc kubenswrapper[5028]: I1123 07:11:03.109162 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 07:11:03 crc kubenswrapper[5028]: I1123 07:11:03.410111 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0" Nov 23 07:11:03 crc kubenswrapper[5028]: I1123 07:11:03.417680 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") " pod="openstack/cinder-scheduler-0"
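NOTE: This run resolves the failure recorded just above: the first SetUp of config-data-custom failed because the node was not yet authorized for the new pod's secret ("no relationship found between node 'crc' and this object"), so the secret cache could not sync. Rather than retrying immediately, the operation was parked ("No retries permitted until ... durationBeforeRetry 500ms"); once the reflector logged "Caches populated" for cinder-scheduler-config-data, the retried mount succeeded. A minimal sketch of that per-operation backoff, assuming an initial 500 ms delay that doubles on each failure (the production implementation also caps the delay, which is omitted here):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // opBackoff tracks when a failed volume operation may run again,
    // like the durationBeforeRetry bookkeeping in the log entry above.
    type opBackoff struct {
        delay   time.Duration
        notTill time.Time
    }

    func (b *opBackoff) fail(now time.Time) {
        if b.delay == 0 {
            b.delay = 500 * time.Millisecond // first retry after 500ms, as logged
        } else {
            b.delay *= 2 // assumed doubling; the real cap is not modeled
        }
        b.notTill = now.Add(b.delay)
    }

    func (b *opBackoff) allowed(now time.Time) bool { return !now.Before(b.notTill) }

    func main() {
        var b opBackoff
        now := time.Now()
        err := errors.New("failed to sync secret cache: timed out waiting for the condition")
        fmt.Println("MountVolume.SetUp failed:", err)
        b.fail(now)
        fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n", b.notTill.Format(time.RFC3339), b.delay)
        fmt.Println("retry allowed immediately:", b.allowed(now))              // false
        fmt.Println("retry allowed after delay:", b.allowed(now.Add(b.delay))) // true
    }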
grace period" pod="openstack/ceilometer-0" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="proxy-httpd" containerID="cri-o://ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" gracePeriod=30 Nov 23 07:11:05 crc kubenswrapper[5028]: I1123 07:11:05.596189 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="sg-core" containerID="cri-o://94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" gracePeriod=30 Nov 23 07:11:05 crc kubenswrapper[5028]: I1123 07:11:05.596230 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-notification-agent" containerID="cri-o://715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" gracePeriod=30 Nov 23 07:11:05 crc kubenswrapper[5028]: I1123 07:11:05.595516 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-central-agent" containerID="cri-o://9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" gracePeriod=30 Nov 23 07:11:05 crc kubenswrapper[5028]: I1123 07:11:05.602622 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:11:05 crc kubenswrapper[5028]: I1123 07:11:05.633147 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4","Type":"ContainerStarted","Data":"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df"} Nov 23 07:11:05 crc kubenswrapper[5028]: I1123 07:11:05.657458 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.657441533 podStartE2EDuration="4.657441533s" podCreationTimestamp="2025-11-23 07:11:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:11:05.653438915 +0000 UTC m=+1249.350843694" watchObservedRunningTime="2025-11-23 07:11:05.657441533 +0000 UTC m=+1249.354846312" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.132036 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-77b69c59d9-28nfd"] Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.170765 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.177213 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77b69c59d9-28nfd"] Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.177658 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.177865 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.178009 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283499 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-run-httpd\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283617 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-public-tls-certs\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283662 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-etc-swift\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283680 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-internal-tls-certs\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283711 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-log-httpd\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283753 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-config-data\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283785 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-combined-ca-bundle\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " 
pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.283818 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l488c\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-kube-api-access-l488c\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386257 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-public-tls-certs\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-etc-swift\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386327 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-internal-tls-certs\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386353 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-log-httpd\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386390 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-config-data\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386417 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-combined-ca-bundle\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386454 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l488c\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-kube-api-access-l488c\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.386483 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-run-httpd\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " 
pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.388844 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-run-httpd\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.389551 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-log-httpd\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.402790 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-internal-tls-certs\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.403287 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-config-data\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.421346 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l488c\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-kube-api-access-l488c\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.431159 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-combined-ca-bundle\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.432881 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-public-tls-certs\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.432925 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-etc-swift\") pod \"swift-proxy-77b69c59d9-28nfd\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.510085 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.549679 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.594244 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwvr2\" (UniqueName: \"kubernetes.io/projected/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-kube-api-access-kwvr2\") pod \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.594286 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-combined-ca-bundle\") pod \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.594431 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-config-data\") pod \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.594464 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-run-httpd\") pod \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.594538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-log-httpd\") pod \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.594562 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-sg-core-conf-yaml\") pod \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.594585 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-scripts\") pod \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\" (UID: \"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128\") " Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.595374 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" (UID: "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.595619 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" (UID: "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.603079 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-kube-api-access-kwvr2" (OuterVolumeSpecName: "kube-api-access-kwvr2") pod "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" (UID: "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128"). InnerVolumeSpecName "kube-api-access-kwvr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.604663 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-scripts" (OuterVolumeSpecName: "scripts") pod "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" (UID: "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653264 5028 generic.go:334] "Generic (PLEG): container finished" podID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerID="ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" exitCode=0 Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653295 5028 generic.go:334] "Generic (PLEG): container finished" podID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerID="94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" exitCode=2 Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653301 5028 generic.go:334] "Generic (PLEG): container finished" podID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerID="715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" exitCode=0 Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653308 5028 generic.go:334] "Generic (PLEG): container finished" podID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerID="9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" exitCode=0 Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653352 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653397 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerDied","Data":"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d"} Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653433 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerDied","Data":"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29"} Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653448 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerDied","Data":"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6"} Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653469 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerDied","Data":"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12"} Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653479 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d0fbcc24-53e3-4d12-a5b7-7425ab4d1128","Type":"ContainerDied","Data":"17c38227678516741eb948ad7664f0c153a562403f247f5a8cc81ffc605c9145"} Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.653494 5028 scope.go:117] "RemoveContainer" containerID="ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.663145 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" (UID: "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.698532 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.699168 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.699263 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.699508 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.699615 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwvr2\" (UniqueName: \"kubernetes.io/projected/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-kube-api-access-kwvr2\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.707571 5028 scope.go:117] "RemoveContainer" containerID="94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.790231 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-config-data" (OuterVolumeSpecName: "config-data") pod "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" (UID: "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.803914 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.817220 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" (UID: "d0fbcc24-53e3-4d12-a5b7-7425ab4d1128"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.906273 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.916133 5028 scope.go:117] "RemoveContainer" containerID="715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.943201 5028 scope.go:117] "RemoveContainer" containerID="9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.988316 5028 scope.go:117] "RemoveContainer" containerID="ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" Nov 23 07:11:06 crc kubenswrapper[5028]: E1123 07:11:06.991907 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": container with ID starting with ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d not found: ID does not exist" containerID="ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.991940 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d"} err="failed to get container status \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": rpc error: code = NotFound desc = could not find container \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": container with ID starting with ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.991975 5028 scope.go:117] "RemoveContainer" containerID="94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" Nov 23 07:11:06 crc kubenswrapper[5028]: E1123 07:11:06.992764 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": container with ID starting with 94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29 not found: ID does not exist" containerID="94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.992806 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29"} err="failed to get container status \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": rpc error: code = NotFound desc = could not find container \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": container with ID starting with 94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29 not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.992835 5028 scope.go:117] "RemoveContainer" containerID="715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" Nov 23 07:11:06 crc kubenswrapper[5028]: E1123 07:11:06.993535 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": container with ID starting with 715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6 not found: ID does not exist" containerID="715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.993565 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6"} err="failed to get container status \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": rpc error: code = NotFound desc = could not find container \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": container with ID starting with 715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6 not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.993583 5028 scope.go:117] "RemoveContainer" containerID="9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" Nov 23 07:11:06 crc kubenswrapper[5028]: E1123 07:11:06.993971 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": container with ID starting with 9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12 not found: ID does not exist" containerID="9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.994024 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12"} err="failed to get container status \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": rpc error: code = NotFound desc = could not find container \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": container with ID starting with 9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12 not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.994247 5028 scope.go:117] "RemoveContainer" containerID="ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.995009 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d"} err="failed to get container status \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": rpc error: code = NotFound desc = could not find container \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": container with ID starting with ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.995038 5028 scope.go:117] "RemoveContainer" containerID="94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.995326 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29"} err="failed to get container status \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": rpc error: code = NotFound desc = could not find container \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": container with ID starting with 
94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29 not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.995353 5028 scope.go:117] "RemoveContainer" containerID="715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.996366 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6"} err="failed to get container status \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": rpc error: code = NotFound desc = could not find container \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": container with ID starting with 715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6 not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.996504 5028 scope.go:117] "RemoveContainer" containerID="9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.996789 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12"} err="failed to get container status \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": rpc error: code = NotFound desc = could not find container \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": container with ID starting with 9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12 not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.996827 5028 scope.go:117] "RemoveContainer" containerID="ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.998881 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d"} err="failed to get container status \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": rpc error: code = NotFound desc = could not find container \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": container with ID starting with ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d not found: ID does not exist" Nov 23 07:11:06 crc kubenswrapper[5028]: I1123 07:11:06.998958 5028 scope.go:117] "RemoveContainer" containerID="94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.001609 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29"} err="failed to get container status \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": rpc error: code = NotFound desc = could not find container \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": container with ID starting with 94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29 not found: ID does not exist" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.001690 5028 scope.go:117] "RemoveContainer" containerID="715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.003445 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6"} err="failed to get container status \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": rpc error: code = NotFound desc = could not find container \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": container with ID starting with 715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6 not found: ID does not exist" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.003481 5028 scope.go:117] "RemoveContainer" containerID="9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.007634 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12"} err="failed to get container status \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": rpc error: code = NotFound desc = could not find container \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": container with ID starting with 9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12 not found: ID does not exist" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.007681 5028 scope.go:117] "RemoveContainer" containerID="ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.008009 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d"} err="failed to get container status \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": rpc error: code = NotFound desc = could not find container \"ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d\": container with ID starting with ffac68235b1edca31a8bc237c1ef19bea57b88e65e3e187add90f393dbe4ff4d not found: ID does not exist" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.008047 5028 scope.go:117] "RemoveContainer" containerID="94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.009335 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29"} err="failed to get container status \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": rpc error: code = NotFound desc = could not find container \"94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29\": container with ID starting with 94b537ecf539604a8d90bdb47a8904aa544707a7c2a950fd5e01b3d720647d29 not found: ID does not exist" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.009380 5028 scope.go:117] "RemoveContainer" containerID="715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.010497 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6"} err="failed to get container status \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": rpc error: code = NotFound desc = could not find container \"715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6\": container with ID starting with 715442ce06c82c9a82233acc9b38168cbbf8e79bc6ee952af488dc5dcfe6e8d6 not found: ID does not exist" Nov 
23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.010528 5028 scope.go:117] "RemoveContainer" containerID="9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.010847 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12"} err="failed to get container status \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": rpc error: code = NotFound desc = could not find container \"9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12\": container with ID starting with 9157d770a2cbe508c747b8b00ef247fc78ec034ebb40140bb97d5610e304df12 not found: ID does not exist" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.011852 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.024258 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.037285 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:07 crc kubenswrapper[5028]: E1123 07:11:07.037722 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-notification-agent" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.037740 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-notification-agent" Nov 23 07:11:07 crc kubenswrapper[5028]: E1123 07:11:07.037761 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="sg-core" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.037767 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="sg-core" Nov 23 07:11:07 crc kubenswrapper[5028]: E1123 07:11:07.037792 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-central-agent" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.037807 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-central-agent" Nov 23 07:11:07 crc kubenswrapper[5028]: E1123 07:11:07.037826 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="proxy-httpd" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.037832 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="proxy-httpd" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.038024 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="sg-core" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.038042 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-notification-agent" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.038055 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="ceilometer-central-agent" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.038074 5028 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" containerName="proxy-httpd" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.039659 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.041446 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.047037 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.047255 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.079416 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0fbcc24-53e3-4d12-a5b7-7425ab4d1128" path="/var/lib/kubelet/pods/d0fbcc24-53e3-4d12-a5b7-7425ab4d1128/volumes" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.110804 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-config-data\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.110855 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9c6x\" (UniqueName: \"kubernetes.io/projected/bb568cc4-665e-4b25-aa53-c38d848de160-kube-api-access-m9c6x\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.110905 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-scripts\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.111867 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.111911 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-log-httpd\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.112399 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-run-httpd\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.112492 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.149230 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-77b69c59d9-28nfd"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.217895 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-run-httpd\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.217970 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.218277 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-config-data\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.218299 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9c6x\" (UniqueName: \"kubernetes.io/projected/bb568cc4-665e-4b25-aa53-c38d848de160-kube-api-access-m9c6x\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.218344 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-scripts\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.218389 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.220545 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-log-httpd\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.220979 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-log-httpd\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.219238 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-run-httpd\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.223516 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.224552 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-config-data\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.225736 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.235682 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-scripts\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.246600 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9c6x\" (UniqueName: \"kubernetes.io/projected/bb568cc4-665e-4b25-aa53-c38d848de160-kube-api-access-m9c6x\") pod \"ceilometer-0\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") " pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.303734 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kqp69"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.306998 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.317443 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kqp69"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.369578 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.400710 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xz4c6"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.402365 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.415024 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-35e8-account-create-ph7rt"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.416255 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.417579 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.427538 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-operator-scripts\") pod \"nova-api-db-create-kqp69\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") " pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.427672 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sltf4\" (UniqueName: \"kubernetes.io/projected/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-kube-api-access-sltf4\") pod \"nova-api-db-create-kqp69\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") " pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.427833 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-35e8-account-create-ph7rt"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.440883 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xz4c6"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.537481 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-operator-scripts\") pod \"nova-api-db-create-kqp69\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") " pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.537678 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e8708a8-c087-4267-96c6-2eaa00f1905d-operator-scripts\") pod \"nova-cell0-db-create-xz4c6\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") " pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.537700 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kcg8\" (UniqueName: \"kubernetes.io/projected/1e8708a8-c087-4267-96c6-2eaa00f1905d-kube-api-access-4kcg8\") pod \"nova-cell0-db-create-xz4c6\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") " pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.537771 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz876\" (UniqueName: \"kubernetes.io/projected/51ad8dce-7019-4338-9916-70bee8bdcf00-kube-api-access-vz876\") pod \"nova-api-35e8-account-create-ph7rt\" (UID: \"51ad8dce-7019-4338-9916-70bee8bdcf00\") " pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.537804 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sltf4\" (UniqueName: \"kubernetes.io/projected/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-kube-api-access-sltf4\") pod \"nova-api-db-create-kqp69\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") " pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.537842 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad8dce-7019-4338-9916-70bee8bdcf00-operator-scripts\") pod \"nova-api-35e8-account-create-ph7rt\" (UID: \"51ad8dce-7019-4338-9916-70bee8bdcf00\") " pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.538773 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-operator-scripts\") pod \"nova-api-db-create-kqp69\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") " pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.556718 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sltf4\" (UniqueName: \"kubernetes.io/projected/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-kube-api-access-sltf4\") pod \"nova-api-db-create-kqp69\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") " pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.623800 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8b1e-account-create-mq5tg"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.625184 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.631047 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.639752 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz876\" (UniqueName: \"kubernetes.io/projected/51ad8dce-7019-4338-9916-70bee8bdcf00-kube-api-access-vz876\") pod \"nova-api-35e8-account-create-ph7rt\" (UID: \"51ad8dce-7019-4338-9916-70bee8bdcf00\") " pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.639835 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad8dce-7019-4338-9916-70bee8bdcf00-operator-scripts\") pod \"nova-api-35e8-account-create-ph7rt\" (UID: \"51ad8dce-7019-4338-9916-70bee8bdcf00\") " pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.639942 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e8708a8-c087-4267-96c6-2eaa00f1905d-operator-scripts\") pod \"nova-cell0-db-create-xz4c6\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") " pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.639988 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kcg8\" (UniqueName: \"kubernetes.io/projected/1e8708a8-c087-4267-96c6-2eaa00f1905d-kube-api-access-4kcg8\") pod \"nova-cell0-db-create-xz4c6\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") " pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.646139 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad8dce-7019-4338-9916-70bee8bdcf00-operator-scripts\") pod \"nova-api-35e8-account-create-ph7rt\" (UID: 
\"51ad8dce-7019-4338-9916-70bee8bdcf00\") " pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.648745 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e8708a8-c087-4267-96c6-2eaa00f1905d-operator-scripts\") pod \"nova-cell0-db-create-xz4c6\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") " pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.650199 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqp69" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.657715 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-dh5vx"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.659455 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.664430 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b69c59d9-28nfd" event={"ID":"230d8024-5d83-4742-9bf9-77bc956dd4a9","Type":"ContainerStarted","Data":"c2313efa3dc196f747bb767207978f9b4f70c79524bd34c535de3bdf4ae01e56"} Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.664469 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b69c59d9-28nfd" event={"ID":"230d8024-5d83-4742-9bf9-77bc956dd4a9","Type":"ContainerStarted","Data":"0ad4b7a0e7d2ec73592ccb88243a33a7bd7a54a36aedb7e571bfc89d8829f552"} Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.670040 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dh5vx"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.681976 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kcg8\" (UniqueName: \"kubernetes.io/projected/1e8708a8-c087-4267-96c6-2eaa00f1905d-kube-api-access-4kcg8\") pod \"nova-cell0-db-create-xz4c6\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") " pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.684558 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8b1e-account-create-mq5tg"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.686614 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz876\" (UniqueName: \"kubernetes.io/projected/51ad8dce-7019-4338-9916-70bee8bdcf00-kube-api-access-vz876\") pod \"nova-api-35e8-account-create-ph7rt\" (UID: \"51ad8dce-7019-4338-9916-70bee8bdcf00\") " pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.789393 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9p8c\" (UniqueName: \"kubernetes.io/projected/ed0b2201-09f6-4478-b00a-285dcd96ae12-kube-api-access-n9p8c\") pod \"nova-cell0-8b1e-account-create-mq5tg\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") " pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.789606 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3de4d5-2fff-47f3-b769-5f1db4973efd-operator-scripts\") pod \"nova-cell1-db-create-dh5vx\" (UID: 
\"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") " pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.789647 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rc2s\" (UniqueName: \"kubernetes.io/projected/ff3de4d5-2fff-47f3-b769-5f1db4973efd-kube-api-access-7rc2s\") pod \"nova-cell1-db-create-dh5vx\" (UID: \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") " pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.789809 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0b2201-09f6-4478-b00a-285dcd96ae12-operator-scripts\") pod \"nova-cell0-8b1e-account-create-mq5tg\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") " pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.800612 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-35e8-account-create-ph7rt" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.803845 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xz4c6" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.893003 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0b2201-09f6-4478-b00a-285dcd96ae12-operator-scripts\") pod \"nova-cell0-8b1e-account-create-mq5tg\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") " pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.895060 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0b2201-09f6-4478-b00a-285dcd96ae12-operator-scripts\") pod \"nova-cell0-8b1e-account-create-mq5tg\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") " pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.906374 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9p8c\" (UniqueName: \"kubernetes.io/projected/ed0b2201-09f6-4478-b00a-285dcd96ae12-kube-api-access-n9p8c\") pod \"nova-cell0-8b1e-account-create-mq5tg\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") " pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.906548 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3de4d5-2fff-47f3-b769-5f1db4973efd-operator-scripts\") pod \"nova-cell1-db-create-dh5vx\" (UID: \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") " pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.906598 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rc2s\" (UniqueName: \"kubernetes.io/projected/ff3de4d5-2fff-47f3-b769-5f1db4973efd-kube-api-access-7rc2s\") pod \"nova-cell1-db-create-dh5vx\" (UID: \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") " pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.927333 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-17ad-account-create-jc5w9"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.930733 
5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3de4d5-2fff-47f3-b769-5f1db4973efd-operator-scripts\") pod \"nova-cell1-db-create-dh5vx\" (UID: \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") " pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.942809 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rc2s\" (UniqueName: \"kubernetes.io/projected/ff3de4d5-2fff-47f3-b769-5f1db4973efd-kube-api-access-7rc2s\") pod \"nova-cell1-db-create-dh5vx\" (UID: \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") " pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.953404 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9p8c\" (UniqueName: \"kubernetes.io/projected/ed0b2201-09f6-4478-b00a-285dcd96ae12-kube-api-access-n9p8c\") pod \"nova-cell0-8b1e-account-create-mq5tg\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") " pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.967018 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17ad-account-create-jc5w9"] Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.967064 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.973004 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.974329 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8b1e-account-create-mq5tg" Nov 23 07:11:07 crc kubenswrapper[5028]: I1123 07:11:07.982028 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.016910 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-dh5vx" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.021406 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.092809 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68b9d958bb-2lrmv"] Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.093046 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68b9d958bb-2lrmv" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-api" containerID="cri-o://d0afb828e48dcf0fbd5d9267062f1cac16ef87b41adc197c4bbc8dea8bdc1980" gracePeriod=30 Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.093279 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68b9d958bb-2lrmv" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-httpd" containerID="cri-o://8d8356abe87ad54a026437824775e8156d07f25ef4850d0444c87cb4d71ed0e2" gracePeriod=30 Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.111081 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r72s\" (UniqueName: \"kubernetes.io/projected/373db2ec-bd55-424f-bf32-41e7107d8102-kube-api-access-2r72s\") pod \"nova-cell1-17ad-account-create-jc5w9\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") " pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.111126 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373db2ec-bd55-424f-bf32-41e7107d8102-operator-scripts\") pod \"nova-cell1-17ad-account-create-jc5w9\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") " pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.214738 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r72s\" (UniqueName: \"kubernetes.io/projected/373db2ec-bd55-424f-bf32-41e7107d8102-kube-api-access-2r72s\") pod \"nova-cell1-17ad-account-create-jc5w9\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") " pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.214783 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373db2ec-bd55-424f-bf32-41e7107d8102-operator-scripts\") pod \"nova-cell1-17ad-account-create-jc5w9\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") " pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.216568 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373db2ec-bd55-424f-bf32-41e7107d8102-operator-scripts\") pod \"nova-cell1-17ad-account-create-jc5w9\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") " pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.240873 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r72s\" (UniqueName: \"kubernetes.io/projected/373db2ec-bd55-424f-bf32-41e7107d8102-kube-api-access-2r72s\") pod \"nova-cell1-17ad-account-create-jc5w9\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") " 
pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.371778 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17ad-account-create-jc5w9" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.483565 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kqp69"] Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.556337 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.681967 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xz4c6"] Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.699259 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-35e8-account-create-ph7rt"] Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.718710 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerStarted","Data":"a5ccefac6762e0eedfb6e2e6685408a53a93d44638db5a4376e465550b0f8c6c"} Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.726719 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b69c59d9-28nfd" event={"ID":"230d8024-5d83-4742-9bf9-77bc956dd4a9","Type":"ContainerStarted","Data":"2a139570e27e4d3f8e409cd6102e980a7101788faa70c851a25210cd2a440e17"} Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.726769 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.726790 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.732602 5028 generic.go:334] "Generic (PLEG): container finished" podID="77ffa103-e538-400a-b062-6e7f61425356" containerID="8d8356abe87ad54a026437824775e8156d07f25ef4850d0444c87cb4d71ed0e2" exitCode=0 Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.732654 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68b9d958bb-2lrmv" event={"ID":"77ffa103-e538-400a-b062-6e7f61425356","Type":"ContainerDied","Data":"8d8356abe87ad54a026437824775e8156d07f25ef4850d0444c87cb4d71ed0e2"} Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.803434 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-77b69c59d9-28nfd" podStartSLOduration=2.803415052 podStartE2EDuration="2.803415052s" podCreationTimestamp="2025-11-23 07:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:11:08.777450691 +0000 UTC m=+1252.474855470" watchObservedRunningTime="2025-11-23 07:11:08.803415052 +0000 UTC m=+1252.500819831" Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.818800 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-dh5vx"] Nov 23 07:11:08 crc kubenswrapper[5028]: I1123 07:11:08.837458 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8b1e-account-create-mq5tg"] Nov 23 07:11:09 crc kubenswrapper[5028]: I1123 07:11:09.033991 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17ad-account-create-jc5w9"] Nov 23 07:11:13 crc 
kubenswrapper[5028]: W1123 07:11:13.821654 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e8708a8_c087_4267_96c6_2eaa00f1905d.slice/crio-4e0aeb4a3bd427b40cc941e04f5b087b510d71000adb0a731137151eb3f912b1 WatchSource:0}: Error finding container 4e0aeb4a3bd427b40cc941e04f5b087b510d71000adb0a731137151eb3f912b1: Status 404 returned error can't find the container with id 4e0aeb4a3bd427b40cc941e04f5b087b510d71000adb0a731137151eb3f912b1
Nov 23 07:11:13 crc kubenswrapper[5028]: W1123 07:11:13.837930 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod373db2ec_bd55_424f_bf32_41e7107d8102.slice/crio-4a500b9dbb762b893d20ddcca5dd527d8c57d496c2073465ab7f32c905567ea4 WatchSource:0}: Error finding container 4a500b9dbb762b893d20ddcca5dd527d8c57d496c2073465ab7f32c905567ea4: Status 404 returned error can't find the container with id 4a500b9dbb762b893d20ddcca5dd527d8c57d496c2073465ab7f32c905567ea4
Nov 23 07:11:13 crc kubenswrapper[5028]: W1123 07:11:13.850193 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded0b2201_09f6_4478_b00a_285dcd96ae12.slice/crio-215fce42863f54a88b9764bc353cdafabbce78d3f906411a3910fbed407c17c5 WatchSource:0}: Error finding container 215fce42863f54a88b9764bc353cdafabbce78d3f906411a3910fbed407c17c5: Status 404 returned error can't find the container with id 215fce42863f54a88b9764bc353cdafabbce78d3f906411a3910fbed407c17c5
Nov 23 07:11:13 crc kubenswrapper[5028]: I1123 07:11:13.859155 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.797055 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerStarted","Data":"28e4f11fa7886562ef508b9f41f117c2e9f00675f4d16609aeeea581abb3e881"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.798919 5028 generic.go:334] "Generic (PLEG): container finished" podID="373db2ec-bd55-424f-bf32-41e7107d8102" containerID="9f6f77c785c513f419c76be87f7280a8d9724d0b20cf91e0391a4a8a4591486d" exitCode=0
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.798981 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17ad-account-create-jc5w9" event={"ID":"373db2ec-bd55-424f-bf32-41e7107d8102","Type":"ContainerDied","Data":"9f6f77c785c513f419c76be87f7280a8d9724d0b20cf91e0391a4a8a4591486d"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.798998 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17ad-account-create-jc5w9" event={"ID":"373db2ec-bd55-424f-bf32-41e7107d8102","Type":"ContainerStarted","Data":"4a500b9dbb762b893d20ddcca5dd527d8c57d496c2073465ab7f32c905567ea4"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.800362 5028 generic.go:334] "Generic (PLEG): container finished" podID="ed0b2201-09f6-4478-b00a-285dcd96ae12" containerID="477afbc1bdc34977d24d5e8925956af63ee650a2490e70a160d5ae0e9bae6c9b" exitCode=0
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.800530 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8b1e-account-create-mq5tg" event={"ID":"ed0b2201-09f6-4478-b00a-285dcd96ae12","Type":"ContainerDied","Data":"477afbc1bdc34977d24d5e8925956af63ee650a2490e70a160d5ae0e9bae6c9b"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.800551 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8b1e-account-create-mq5tg" event={"ID":"ed0b2201-09f6-4478-b00a-285dcd96ae12","Type":"ContainerStarted","Data":"215fce42863f54a88b9764bc353cdafabbce78d3f906411a3910fbed407c17c5"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.802388 5028 generic.go:334] "Generic (PLEG): container finished" podID="ff3de4d5-2fff-47f3-b769-5f1db4973efd" containerID="f218f774db4ceb31b910144408d32162ae8bbf909051deba432e8f2bc15e5458" exitCode=0
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.802445 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dh5vx" event={"ID":"ff3de4d5-2fff-47f3-b769-5f1db4973efd","Type":"ContainerDied","Data":"f218f774db4ceb31b910144408d32162ae8bbf909051deba432e8f2bc15e5458"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.802468 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dh5vx" event={"ID":"ff3de4d5-2fff-47f3-b769-5f1db4973efd","Type":"ContainerStarted","Data":"5db65c7eed1151333a570779fe7072672f848cc0ef578ed80e35036a2ad7cbb9"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.804817 5028 generic.go:334] "Generic (PLEG): container finished" podID="1e8708a8-c087-4267-96c6-2eaa00f1905d" containerID="3131435438b601514b90e4b84200190dc90bb9ec48c9c5011919f75f86066438" exitCode=0
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.804858 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xz4c6" event={"ID":"1e8708a8-c087-4267-96c6-2eaa00f1905d","Type":"ContainerDied","Data":"3131435438b601514b90e4b84200190dc90bb9ec48c9c5011919f75f86066438"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.804994 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xz4c6" event={"ID":"1e8708a8-c087-4267-96c6-2eaa00f1905d","Type":"ContainerStarted","Data":"4e0aeb4a3bd427b40cc941e04f5b087b510d71000adb0a731137151eb3f912b1"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.808472 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e2100d9d-d4e3-40aa-8082-e6536e2ed096","Type":"ContainerStarted","Data":"378f53f6920941038405cc29b0d2089fe43431d937244b33f18351586a7ec8e8"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.816168 5028 generic.go:334] "Generic (PLEG): container finished" podID="51ad8dce-7019-4338-9916-70bee8bdcf00" containerID="7a3f640c803b7aede08da9084eb7cbdccf31faa94c005b15c074e8234995862a" exitCode=0
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.816228 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-35e8-account-create-ph7rt" event={"ID":"51ad8dce-7019-4338-9916-70bee8bdcf00","Type":"ContainerDied","Data":"7a3f640c803b7aede08da9084eb7cbdccf31faa94c005b15c074e8234995862a"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.816256 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-35e8-account-create-ph7rt" event={"ID":"51ad8dce-7019-4338-9916-70bee8bdcf00","Type":"ContainerStarted","Data":"a63f9bab1913f99d7c8baa5e5fbf37f651cc58e7ea1fd91d84a2b07452f17d86"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.826598 5028 generic.go:334] "Generic (PLEG): container finished" podID="c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0" containerID="1f403dbae9bd40294ebff3aa78450948e835cd967861b28e083f238438f08caa" exitCode=0
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.826654 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqp69" event={"ID":"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0","Type":"ContainerDied","Data":"1f403dbae9bd40294ebff3aa78450948e835cd967861b28e083f238438f08caa"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.826683 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqp69" event={"ID":"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0","Type":"ContainerStarted","Data":"735ce673f2f68fef1b6abe1c6c278ec68ac1b01e389039da009991eac4a948a1"}
Nov 23 07:11:14 crc kubenswrapper[5028]: I1123 07:11:14.838769 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.098325842 podStartE2EDuration="13.83870543s" podCreationTimestamp="2025-11-23 07:11:01 +0000 UTC" firstStartedPulling="2025-11-23 07:11:02.288730178 +0000 UTC m=+1245.986134957" lastFinishedPulling="2025-11-23 07:11:14.029109746 +0000 UTC m=+1257.726514545" observedRunningTime="2025-11-23 07:11:14.836688311 +0000 UTC m=+1258.534093090" watchObservedRunningTime="2025-11-23 07:11:14.83870543 +0000 UTC m=+1258.536110199"
Nov 23 07:11:15 crc kubenswrapper[5028]: I1123 07:11:15.689304 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:11:15 crc kubenswrapper[5028]: I1123 07:11:15.837785 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerStarted","Data":"ed729075547917d8a415a9875cf82bce6c3ac43d3e2a2f19a7eb12f716d36b3c"}
Nov 23 07:11:15 crc kubenswrapper[5028]: I1123 07:11:15.838147 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerStarted","Data":"d253cc7be2871c20344f5349c407a022e0c05a1fffc34acff4a1c3affe6a9aed"}
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.283836 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dh5vx"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.377778 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3de4d5-2fff-47f3-b769-5f1db4973efd-operator-scripts\") pod \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\" (UID: \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.378103 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rc2s\" (UniqueName: \"kubernetes.io/projected/ff3de4d5-2fff-47f3-b769-5f1db4973efd-kube-api-access-7rc2s\") pod \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\" (UID: \"ff3de4d5-2fff-47f3-b769-5f1db4973efd\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.378907 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff3de4d5-2fff-47f3-b769-5f1db4973efd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff3de4d5-2fff-47f3-b769-5f1db4973efd" (UID: "ff3de4d5-2fff-47f3-b769-5f1db4973efd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.394101 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff3de4d5-2fff-47f3-b769-5f1db4973efd-kube-api-access-7rc2s" (OuterVolumeSpecName: "kube-api-access-7rc2s") pod "ff3de4d5-2fff-47f3-b769-5f1db4973efd" (UID: "ff3de4d5-2fff-47f3-b769-5f1db4973efd"). InnerVolumeSpecName "kube-api-access-7rc2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.480372 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rc2s\" (UniqueName: \"kubernetes.io/projected/ff3de4d5-2fff-47f3-b769-5f1db4973efd-kube-api-access-7rc2s\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.480405 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff3de4d5-2fff-47f3-b769-5f1db4973efd-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.559612 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77b69c59d9-28nfd"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.560940 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-77b69c59d9-28nfd"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.584079 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17ad-account-create-jc5w9"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.589661 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xz4c6"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.598413 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-35e8-account-create-ph7rt"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.609096 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqp69"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.616775 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8b1e-account-create-mq5tg"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.683475 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e8708a8-c087-4267-96c6-2eaa00f1905d-operator-scripts\") pod \"1e8708a8-c087-4267-96c6-2eaa00f1905d\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.683534 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r72s\" (UniqueName: \"kubernetes.io/projected/373db2ec-bd55-424f-bf32-41e7107d8102-kube-api-access-2r72s\") pod \"373db2ec-bd55-424f-bf32-41e7107d8102\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.683570 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad8dce-7019-4338-9916-70bee8bdcf00-operator-scripts\") pod \"51ad8dce-7019-4338-9916-70bee8bdcf00\" (UID: \"51ad8dce-7019-4338-9916-70bee8bdcf00\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.683621 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373db2ec-bd55-424f-bf32-41e7107d8102-operator-scripts\") pod \"373db2ec-bd55-424f-bf32-41e7107d8102\" (UID: \"373db2ec-bd55-424f-bf32-41e7107d8102\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.683690 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kcg8\" (UniqueName: \"kubernetes.io/projected/1e8708a8-c087-4267-96c6-2eaa00f1905d-kube-api-access-4kcg8\") pod \"1e8708a8-c087-4267-96c6-2eaa00f1905d\" (UID: \"1e8708a8-c087-4267-96c6-2eaa00f1905d\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.683835 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz876\" (UniqueName: \"kubernetes.io/projected/51ad8dce-7019-4338-9916-70bee8bdcf00-kube-api-access-vz876\") pod \"51ad8dce-7019-4338-9916-70bee8bdcf00\" (UID: \"51ad8dce-7019-4338-9916-70bee8bdcf00\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.684465 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e8708a8-c087-4267-96c6-2eaa00f1905d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e8708a8-c087-4267-96c6-2eaa00f1905d" (UID: "1e8708a8-c087-4267-96c6-2eaa00f1905d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.684722 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e8708a8-c087-4267-96c6-2eaa00f1905d-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.685632 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ad8dce-7019-4338-9916-70bee8bdcf00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51ad8dce-7019-4338-9916-70bee8bdcf00" (UID: "51ad8dce-7019-4338-9916-70bee8bdcf00"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.685919 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/373db2ec-bd55-424f-bf32-41e7107d8102-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "373db2ec-bd55-424f-bf32-41e7107d8102" (UID: "373db2ec-bd55-424f-bf32-41e7107d8102"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.692903 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e8708a8-c087-4267-96c6-2eaa00f1905d-kube-api-access-4kcg8" (OuterVolumeSpecName: "kube-api-access-4kcg8") pod "1e8708a8-c087-4267-96c6-2eaa00f1905d" (UID: "1e8708a8-c087-4267-96c6-2eaa00f1905d"). InnerVolumeSpecName "kube-api-access-4kcg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.693056 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373db2ec-bd55-424f-bf32-41e7107d8102-kube-api-access-2r72s" (OuterVolumeSpecName: "kube-api-access-2r72s") pod "373db2ec-bd55-424f-bf32-41e7107d8102" (UID: "373db2ec-bd55-424f-bf32-41e7107d8102"). InnerVolumeSpecName "kube-api-access-2r72s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.693125 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ad8dce-7019-4338-9916-70bee8bdcf00-kube-api-access-vz876" (OuterVolumeSpecName: "kube-api-access-vz876") pod "51ad8dce-7019-4338-9916-70bee8bdcf00" (UID: "51ad8dce-7019-4338-9916-70bee8bdcf00"). InnerVolumeSpecName "kube-api-access-vz876". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.785758 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sltf4\" (UniqueName: \"kubernetes.io/projected/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-kube-api-access-sltf4\") pod \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786005 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-operator-scripts\") pod \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\" (UID: \"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786038 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0b2201-09f6-4478-b00a-285dcd96ae12-operator-scripts\") pod \"ed0b2201-09f6-4478-b00a-285dcd96ae12\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786099 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9p8c\" (UniqueName: \"kubernetes.io/projected/ed0b2201-09f6-4478-b00a-285dcd96ae12-kube-api-access-n9p8c\") pod \"ed0b2201-09f6-4478-b00a-285dcd96ae12\" (UID: \"ed0b2201-09f6-4478-b00a-285dcd96ae12\") "
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786910 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz876\" (UniqueName: \"kubernetes.io/projected/51ad8dce-7019-4338-9916-70bee8bdcf00-kube-api-access-vz876\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786938 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r72s\" (UniqueName: \"kubernetes.io/projected/373db2ec-bd55-424f-bf32-41e7107d8102-kube-api-access-2r72s\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786962 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51ad8dce-7019-4338-9916-70bee8bdcf00-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786971 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/373db2ec-bd55-424f-bf32-41e7107d8102-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.786981 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kcg8\" (UniqueName: \"kubernetes.io/projected/1e8708a8-c087-4267-96c6-2eaa00f1905d-kube-api-access-4kcg8\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.787148 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0" (UID: "c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.787319 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed0b2201-09f6-4478-b00a-285dcd96ae12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed0b2201-09f6-4478-b00a-285dcd96ae12" (UID: "ed0b2201-09f6-4478-b00a-285dcd96ae12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.792490 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-kube-api-access-sltf4" (OuterVolumeSpecName: "kube-api-access-sltf4") pod "c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0" (UID: "c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0"). InnerVolumeSpecName "kube-api-access-sltf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.792663 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed0b2201-09f6-4478-b00a-285dcd96ae12-kube-api-access-n9p8c" (OuterVolumeSpecName: "kube-api-access-n9p8c") pod "ed0b2201-09f6-4478-b00a-285dcd96ae12" (UID: "ed0b2201-09f6-4478-b00a-285dcd96ae12"). InnerVolumeSpecName "kube-api-access-n9p8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.847100 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-dh5vx"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.849294 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-dh5vx" event={"ID":"ff3de4d5-2fff-47f3-b769-5f1db4973efd","Type":"ContainerDied","Data":"5db65c7eed1151333a570779fe7072672f848cc0ef578ed80e35036a2ad7cbb9"}
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.849338 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5db65c7eed1151333a570779fe7072672f848cc0ef578ed80e35036a2ad7cbb9"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.853329 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xz4c6" event={"ID":"1e8708a8-c087-4267-96c6-2eaa00f1905d","Type":"ContainerDied","Data":"4e0aeb4a3bd427b40cc941e04f5b087b510d71000adb0a731137151eb3f912b1"}
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.853375 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e0aeb4a3bd427b40cc941e04f5b087b510d71000adb0a731137151eb3f912b1"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.853442 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xz4c6"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.865146 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-35e8-account-create-ph7rt"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.867724 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-35e8-account-create-ph7rt" event={"ID":"51ad8dce-7019-4338-9916-70bee8bdcf00","Type":"ContainerDied","Data":"a63f9bab1913f99d7c8baa5e5fbf37f651cc58e7ea1fd91d84a2b07452f17d86"}
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.867825 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63f9bab1913f99d7c8baa5e5fbf37f651cc58e7ea1fd91d84a2b07452f17d86"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.870435 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqp69"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.870427 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqp69" event={"ID":"c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0","Type":"ContainerDied","Data":"735ce673f2f68fef1b6abe1c6c278ec68ac1b01e389039da009991eac4a948a1"}
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.871436 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735ce673f2f68fef1b6abe1c6c278ec68ac1b01e389039da009991eac4a948a1"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.874976 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17ad-account-create-jc5w9" event={"ID":"373db2ec-bd55-424f-bf32-41e7107d8102","Type":"ContainerDied","Data":"4a500b9dbb762b893d20ddcca5dd527d8c57d496c2073465ab7f32c905567ea4"}
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.875011 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a500b9dbb762b893d20ddcca5dd527d8c57d496c2073465ab7f32c905567ea4"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.874992 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17ad-account-create-jc5w9"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.891522 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8b1e-account-create-mq5tg"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.891575 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8b1e-account-create-mq5tg" event={"ID":"ed0b2201-09f6-4478-b00a-285dcd96ae12","Type":"ContainerDied","Data":"215fce42863f54a88b9764bc353cdafabbce78d3f906411a3910fbed407c17c5"}
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.891610 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215fce42863f54a88b9764bc353cdafabbce78d3f906411a3910fbed407c17c5"
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.898941 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.899448 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed0b2201-09f6-4478-b00a-285dcd96ae12-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.907839 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9p8c\" (UniqueName: \"kubernetes.io/projected/ed0b2201-09f6-4478-b00a-285dcd96ae12-kube-api-access-n9p8c\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:16 crc kubenswrapper[5028]: I1123 07:11:16.909455 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sltf4\" (UniqueName: \"kubernetes.io/projected/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0-kube-api-access-sltf4\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:17 crc kubenswrapper[5028]: I1123 07:11:17.904116 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerStarted","Data":"a7af1adc720fee3ae2e69abfaea15a656aabbf840175e2eb9f1876b898234e6f"}
Nov 23 07:11:17 crc kubenswrapper[5028]: I1123 07:11:17.904572 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 23 07:11:17 crc kubenswrapper[5028]: I1123 07:11:17.904462 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-central-agent" containerID="cri-o://28e4f11fa7886562ef508b9f41f117c2e9f00675f4d16609aeeea581abb3e881" gracePeriod=30
Nov 23 07:11:17 crc kubenswrapper[5028]: I1123 07:11:17.905110 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-notification-agent" containerID="cri-o://d253cc7be2871c20344f5349c407a022e0c05a1fffc34acff4a1c3affe6a9aed" gracePeriod=30
Nov 23 07:11:17 crc kubenswrapper[5028]: I1123 07:11:17.905219 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="sg-core" containerID="cri-o://ed729075547917d8a415a9875cf82bce6c3ac43d3e2a2f19a7eb12f716d36b3c" gracePeriod=30
Nov 23 07:11:17 crc kubenswrapper[5028]: I1123 07:11:17.905066 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="proxy-httpd" containerID="cri-o://a7af1adc720fee3ae2e69abfaea15a656aabbf840175e2eb9f1876b898234e6f" gracePeriod=30
Nov 23 07:11:17 crc kubenswrapper[5028]: I1123 07:11:17.929124 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.133864696 podStartE2EDuration="11.929105298s" podCreationTimestamp="2025-11-23 07:11:06 +0000 UTC" firstStartedPulling="2025-11-23 07:11:08.117782072 +0000 UTC m=+1251.815186851" lastFinishedPulling="2025-11-23 07:11:16.913022684 +0000 UTC m=+1260.610427453" observedRunningTime="2025-11-23 07:11:17.9275422 +0000 UTC m=+1261.624946979" watchObservedRunningTime="2025-11-23 07:11:17.929105298 +0000 UTC m=+1261.626510077"
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916506 5028 generic.go:334] "Generic (PLEG): container finished" podID="bb568cc4-665e-4b25-aa53-c38d848de160" containerID="a7af1adc720fee3ae2e69abfaea15a656aabbf840175e2eb9f1876b898234e6f" exitCode=0
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916543 5028 generic.go:334] "Generic (PLEG): container finished" podID="bb568cc4-665e-4b25-aa53-c38d848de160" containerID="ed729075547917d8a415a9875cf82bce6c3ac43d3e2a2f19a7eb12f716d36b3c" exitCode=2
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916553 5028 generic.go:334] "Generic (PLEG): container finished" podID="bb568cc4-665e-4b25-aa53-c38d848de160" containerID="d253cc7be2871c20344f5349c407a022e0c05a1fffc34acff4a1c3affe6a9aed" exitCode=0
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916563 5028 generic.go:334] "Generic (PLEG): container finished" podID="bb568cc4-665e-4b25-aa53-c38d848de160" containerID="28e4f11fa7886562ef508b9f41f117c2e9f00675f4d16609aeeea581abb3e881" exitCode=0
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916585 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerDied","Data":"a7af1adc720fee3ae2e69abfaea15a656aabbf840175e2eb9f1876b898234e6f"}
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916615 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerDied","Data":"ed729075547917d8a415a9875cf82bce6c3ac43d3e2a2f19a7eb12f716d36b3c"}
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916631 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerDied","Data":"d253cc7be2871c20344f5349c407a022e0c05a1fffc34acff4a1c3affe6a9aed"}
Nov 23 07:11:18 crc kubenswrapper[5028]: I1123 07:11:18.916642 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerDied","Data":"28e4f11fa7886562ef508b9f41f117c2e9f00675f4d16609aeeea581abb3e881"}
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.204887 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.267368 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-log-httpd\") pod \"bb568cc4-665e-4b25-aa53-c38d848de160\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") "
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.267417 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-combined-ca-bundle\") pod \"bb568cc4-665e-4b25-aa53-c38d848de160\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") "
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.267457 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9c6x\" (UniqueName: \"kubernetes.io/projected/bb568cc4-665e-4b25-aa53-c38d848de160-kube-api-access-m9c6x\") pod \"bb568cc4-665e-4b25-aa53-c38d848de160\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") "
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.267479 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-config-data\") pod \"bb568cc4-665e-4b25-aa53-c38d848de160\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") "
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.268681 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb568cc4-665e-4b25-aa53-c38d848de160" (UID: "bb568cc4-665e-4b25-aa53-c38d848de160"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.292309 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-run-httpd\") pod \"bb568cc4-665e-4b25-aa53-c38d848de160\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") "
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.292402 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-sg-core-conf-yaml\") pod \"bb568cc4-665e-4b25-aa53-c38d848de160\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") "
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.292875 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb568cc4-665e-4b25-aa53-c38d848de160" (UID: "bb568cc4-665e-4b25-aa53-c38d848de160"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.293306 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-scripts\") pod \"bb568cc4-665e-4b25-aa53-c38d848de160\" (UID: \"bb568cc4-665e-4b25-aa53-c38d848de160\") "
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.294240 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.294262 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb568cc4-665e-4b25-aa53-c38d848de160-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.310811 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-scripts" (OuterVolumeSpecName: "scripts") pod "bb568cc4-665e-4b25-aa53-c38d848de160" (UID: "bb568cc4-665e-4b25-aa53-c38d848de160"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.312266 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb568cc4-665e-4b25-aa53-c38d848de160-kube-api-access-m9c6x" (OuterVolumeSpecName: "kube-api-access-m9c6x") pod "bb568cc4-665e-4b25-aa53-c38d848de160" (UID: "bb568cc4-665e-4b25-aa53-c38d848de160"). InnerVolumeSpecName "kube-api-access-m9c6x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.315965 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.317056 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-log" containerID="cri-o://be508a2dec08b87a3f5be29c9ff855aaffe2608171428f79c6f3f221f46cb9f6" gracePeriod=30
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.317240 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-httpd" containerID="cri-o://69c8177af17d342a89729e9de14e1a959c4d823fa304d91c39b37495ccdcceae" gracePeriod=30
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.324077 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb568cc4-665e-4b25-aa53-c38d848de160" (UID: "bb568cc4-665e-4b25-aa53-c38d848de160"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.367519 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb568cc4-665e-4b25-aa53-c38d848de160" (UID: "bb568cc4-665e-4b25-aa53-c38d848de160"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.396757 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.396791 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.396802 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9c6x\" (UniqueName: \"kubernetes.io/projected/bb568cc4-665e-4b25-aa53-c38d848de160-kube-api-access-m9c6x\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.396814 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.409109 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-config-data" (OuterVolumeSpecName: "config-data") pod "bb568cc4-665e-4b25-aa53-c38d848de160" (UID: "bb568cc4-665e-4b25-aa53-c38d848de160"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.498198 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb568cc4-665e-4b25-aa53-c38d848de160-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.928820 5028 generic.go:334] "Generic (PLEG): container finished" podID="7f520978-76fa-4bde-80df-dff8d693eb23" containerID="be508a2dec08b87a3f5be29c9ff855aaffe2608171428f79c6f3f221f46cb9f6" exitCode=143
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.928896 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f520978-76fa-4bde-80df-dff8d693eb23","Type":"ContainerDied","Data":"be508a2dec08b87a3f5be29c9ff855aaffe2608171428f79c6f3f221f46cb9f6"}
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.933688 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bb568cc4-665e-4b25-aa53-c38d848de160","Type":"ContainerDied","Data":"a5ccefac6762e0eedfb6e2e6685408a53a93d44638db5a4376e465550b0f8c6c"}
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.933730 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.933746 5028 scope.go:117] "RemoveContainer" containerID="a7af1adc720fee3ae2e69abfaea15a656aabbf840175e2eb9f1876b898234e6f"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.937370 5028 generic.go:334] "Generic (PLEG): container finished" podID="77ffa103-e538-400a-b062-6e7f61425356" containerID="d0afb828e48dcf0fbd5d9267062f1cac16ef87b41adc197c4bbc8dea8bdc1980" exitCode=0
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.937406 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68b9d958bb-2lrmv" event={"ID":"77ffa103-e538-400a-b062-6e7f61425356","Type":"ContainerDied","Data":"d0afb828e48dcf0fbd5d9267062f1cac16ef87b41adc197c4bbc8dea8bdc1980"}
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.960153 5028 scope.go:117] "RemoveContainer" containerID="ed729075547917d8a415a9875cf82bce6c3ac43d3e2a2f19a7eb12f716d36b3c"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.970017 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.979904 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995268 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995667 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed0b2201-09f6-4478-b00a-285dcd96ae12" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995683 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed0b2201-09f6-4478-b00a-285dcd96ae12" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995695 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-central-agent"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995701 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-central-agent"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995710 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff3de4d5-2fff-47f3-b769-5f1db4973efd" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995716 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff3de4d5-2fff-47f3-b769-5f1db4973efd" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995730 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="proxy-httpd"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995735 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="proxy-httpd"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995746 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373db2ec-bd55-424f-bf32-41e7107d8102" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995752 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="373db2ec-bd55-424f-bf32-41e7107d8102" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995770 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="sg-core"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995776 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="sg-core"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995788 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995793 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995803 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ad8dce-7019-4338-9916-70bee8bdcf00" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995808 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ad8dce-7019-4338-9916-70bee8bdcf00" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995821 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-notification-agent"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995827 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-notification-agent"
Nov 23 07:11:19 crc kubenswrapper[5028]: E1123 07:11:19.995844 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e8708a8-c087-4267-96c6-2eaa00f1905d" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.995849 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e8708a8-c087-4267-96c6-2eaa00f1905d" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996051 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="proxy-httpd"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996064 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ad8dce-7019-4338-9916-70bee8bdcf00" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996074 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="sg-core"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996087 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-notification-agent"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996094 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996103 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" containerName="ceilometer-central-agent"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996109 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed0b2201-09f6-4478-b00a-285dcd96ae12" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996119 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff3de4d5-2fff-47f3-b769-5f1db4973efd" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996130 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="373db2ec-bd55-424f-bf32-41e7107d8102" containerName="mariadb-account-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.996140 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e8708a8-c087-4267-96c6-2eaa00f1905d" containerName="mariadb-database-create"
Nov 23 07:11:19 crc kubenswrapper[5028]: I1123 07:11:19.997687 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.002294 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.002331 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.012910 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.028401 5028 scope.go:117] "RemoveContainer" containerID="d253cc7be2871c20344f5349c407a022e0c05a1fffc34acff4a1c3affe6a9aed"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.063633 5028 scope.go:117] "RemoveContainer" containerID="28e4f11fa7886562ef508b9f41f117c2e9f00675f4d16609aeeea581abb3e881"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.106563 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-log-httpd\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.107491 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.107621 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8r7v\" (UniqueName: \"kubernetes.io/projected/cc4455e6-6afd-44fd-b290-8c6acce088ad-kube-api-access-v8r7v\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.107883 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-config-data\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.107933 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-scripts\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.108015 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-run-httpd\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.108143 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.210072 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.210134 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-log-httpd\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.210196 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.210232 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8r7v\" (UniqueName: \"kubernetes.io/projected/cc4455e6-6afd-44fd-b290-8c6acce088ad-kube-api-access-v8r7v\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.210335 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-config-data\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.210368 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-scripts\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.210399 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-run-httpd\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.211367 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-run-httpd\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.211373 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-log-httpd\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.226886 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-config-data\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.227250 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68b9d958bb-2lrmv"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.229433 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.230547 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.231531 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-scripts\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.234653 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8r7v\" (UniqueName: \"kubernetes.io/projected/cc4455e6-6afd-44fd-b290-8c6acce088ad-kube-api-access-v8r7v\") pod \"ceilometer-0\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.311235 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-config\") pod \"77ffa103-e538-400a-b062-6e7f61425356\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") "
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.311355 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-combined-ca-bundle\") pod \"77ffa103-e538-400a-b062-6e7f61425356\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") "
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.311540 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-ovndb-tls-certs\") pod \"77ffa103-e538-400a-b062-6e7f61425356\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") "
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.311695 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-httpd-config\") pod \"77ffa103-e538-400a-b062-6e7f61425356\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") "
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.311805 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shnhz\" (UniqueName: \"kubernetes.io/projected/77ffa103-e538-400a-b062-6e7f61425356-kube-api-access-shnhz\") pod \"77ffa103-e538-400a-b062-6e7f61425356\" (UID: \"77ffa103-e538-400a-b062-6e7f61425356\") "
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.318227 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ffa103-e538-400a-b062-6e7f61425356-kube-api-access-shnhz" (OuterVolumeSpecName: "kube-api-access-shnhz") pod "77ffa103-e538-400a-b062-6e7f61425356" (UID: "77ffa103-e538-400a-b062-6e7f61425356"). InnerVolumeSpecName "kube-api-access-shnhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.321441 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "77ffa103-e538-400a-b062-6e7f61425356" (UID: "77ffa103-e538-400a-b062-6e7f61425356"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.333352 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.364315 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77ffa103-e538-400a-b062-6e7f61425356" (UID: "77ffa103-e538-400a-b062-6e7f61425356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.376464 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-config" (OuterVolumeSpecName: "config") pod "77ffa103-e538-400a-b062-6e7f61425356" (UID: "77ffa103-e538-400a-b062-6e7f61425356"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.406489 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "77ffa103-e538-400a-b062-6e7f61425356" (UID: "77ffa103-e538-400a-b062-6e7f61425356"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.416552 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shnhz\" (UniqueName: \"kubernetes.io/projected/77ffa103-e538-400a-b062-6e7f61425356-kube-api-access-shnhz\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.416591 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.416605 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.416616 5028 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.416627 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/77ffa103-e538-400a-b062-6e7f61425356-httpd-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.801618 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.951462 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerStarted","Data":"238d0ea2a9df7ac52e95598448198a57f849e26b6c33d6f8246a77b09e6f43af"}
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.953382 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68b9d958bb-2lrmv" event={"ID":"77ffa103-e538-400a-b062-6e7f61425356","Type":"ContainerDied","Data":"83180198c6d66f92cf2a58da04644f3539e9629cd06186aa7da3839049558df2"}
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.953417 5028 scope.go:117] "RemoveContainer" containerID="8d8356abe87ad54a026437824775e8156d07f25ef4850d0444c87cb4d71ed0e2"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.953510 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68b9d958bb-2lrmv"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.989312 5028 scope.go:117] "RemoveContainer" containerID="d0afb828e48dcf0fbd5d9267062f1cac16ef87b41adc197c4bbc8dea8bdc1980"
Nov 23 07:11:20 crc kubenswrapper[5028]: I1123 07:11:20.990320 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68b9d958bb-2lrmv"]
Nov 23 07:11:21 crc kubenswrapper[5028]: I1123 07:11:21.000064 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68b9d958bb-2lrmv"]
Nov 23 07:11:21 crc kubenswrapper[5028]: I1123 07:11:21.062252 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ffa103-e538-400a-b062-6e7f61425356" path="/var/lib/kubelet/pods/77ffa103-e538-400a-b062-6e7f61425356/volumes"
Nov 23 07:11:21 crc kubenswrapper[5028]: I1123 07:11:21.063058 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb568cc4-665e-4b25-aa53-c38d848de160" path="/var/lib/kubelet/pods/bb568cc4-665e-4b25-aa53-c38d848de160/volumes"
Nov 23 07:11:21 crc kubenswrapper[5028]: I1123 07:11:21.962502 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerStarted","Data":"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8"}
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.390293 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8v4x"]
Nov 23 07:11:23 crc kubenswrapper[5028]: E1123 07:11:23.391206 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-httpd"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.391224 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-httpd"
Nov 23 07:11:23 crc kubenswrapper[5028]: E1123 07:11:23.391283 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-api"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.391291 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-api"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.391503 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-httpd"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.391523 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ffa103-e538-400a-b062-6e7f61425356" containerName="neutron-api"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.392386 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8v4x"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.394422 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.395731 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4zjjs"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.396118 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.403065 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8v4x"]
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.573056 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-config-data\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.573130 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r977w\" (UniqueName: \"kubernetes.io/projected/78e0a689-cd74-4797-9d4f-8647ec86df48-kube-api-access-r977w\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.573169 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.573213 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-scripts\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.675437 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.675584 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-scripts\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x"
Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.675749 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-config-data\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID:
\"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.675881 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r977w\" (UniqueName: \"kubernetes.io/projected/78e0a689-cd74-4797-9d4f-8647ec86df48-kube-api-access-r977w\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.687885 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-scripts\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.688227 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.688179 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-config-data\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.695423 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r977w\" (UniqueName: \"kubernetes.io/projected/78e0a689-cd74-4797-9d4f-8647ec86df48-kube-api-access-r977w\") pod \"nova-cell0-conductor-db-sync-m8v4x\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:23 crc kubenswrapper[5028]: I1123 07:11:23.723569 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.001574 5028 generic.go:334] "Generic (PLEG): container finished" podID="7f520978-76fa-4bde-80df-dff8d693eb23" containerID="69c8177af17d342a89729e9de14e1a959c4d823fa304d91c39b37495ccdcceae" exitCode=0 Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.001813 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f520978-76fa-4bde-80df-dff8d693eb23","Type":"ContainerDied","Data":"69c8177af17d342a89729e9de14e1a959c4d823fa304d91c39b37495ccdcceae"} Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.225299 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8v4x"] Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.878911 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.888484 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.888697 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-log" containerID="cri-o://77fa97d7bf7f52ef3b1c18b87dc33585d5409dbe348234c57d70fa538bb5a744" gracePeriod=30 Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.888821 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-httpd" containerID="cri-o://ff5aef9adabfdd605326740c8d8a830c5c23f6cca3761f541107ce03bf8b3745" gracePeriod=30 Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.904752 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-scripts\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.904818 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-config-data\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.904845 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88fds\" (UniqueName: \"kubernetes.io/projected/7f520978-76fa-4bde-80df-dff8d693eb23-kube-api-access-88fds\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.904882 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-httpd-run\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.904903 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.904923 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-combined-ca-bundle\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.904983 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-logs\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.905006 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-public-tls-certs\") pod \"7f520978-76fa-4bde-80df-dff8d693eb23\" (UID: \"7f520978-76fa-4bde-80df-dff8d693eb23\") " Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.907657 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.909338 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-logs" (OuterVolumeSpecName: "logs") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.914109 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.960152 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-scripts" (OuterVolumeSpecName: "scripts") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:24 crc kubenswrapper[5028]: I1123 07:11:24.966132 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f520978-76fa-4bde-80df-dff8d693eb23-kube-api-access-88fds" (OuterVolumeSpecName: "kube-api-access-88fds") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "kube-api-access-88fds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.010317 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.010702 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.010713 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88fds\" (UniqueName: \"kubernetes.io/projected/7f520978-76fa-4bde-80df-dff8d693eb23-kube-api-access-88fds\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.010725 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f520978-76fa-4bde-80df-dff8d693eb23-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.010749 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.033257 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7f520978-76fa-4bde-80df-dff8d693eb23","Type":"ContainerDied","Data":"9722f5e92d3e38d3b40b17305614d252b46962c527f4050e1ace2d1588123fa6"} Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.033310 5028 scope.go:117] "RemoveContainer" containerID="69c8177af17d342a89729e9de14e1a959c4d823fa304d91c39b37495ccdcceae" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.033447 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.036218 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.038196 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" event={"ID":"78e0a689-cd74-4797-9d4f-8647ec86df48","Type":"ContainerStarted","Data":"fb22eca6ae8b8f8942bc0738381b08e32ff7173ab22d18e67d720fa3a2bbec0a"} Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.044410 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerStarted","Data":"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0"} Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.086675 5028 scope.go:117] "RemoveContainer" containerID="be508a2dec08b87a3f5be29c9ff855aaffe2608171428f79c6f3f221f46cb9f6" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.094246 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.095037 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.112906 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-config-data" (OuterVolumeSpecName: "config-data") pod "7f520978-76fa-4bde-80df-dff8d693eb23" (UID: "7f520978-76fa-4bde-80df-dff8d693eb23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.113076 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.113088 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.113097 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f520978-76fa-4bde-80df-dff8d693eb23-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.113106 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.375662 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.386540 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.400373 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:11:25 crc kubenswrapper[5028]: E1123 07:11:25.400813 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-log" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.400831 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-log" Nov 23 07:11:25 crc kubenswrapper[5028]: E1123 07:11:25.400873 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-httpd" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.400881 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-httpd" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.401168 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-httpd" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.401191 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" containerName="glance-log" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.402574 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.406357 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.406920 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.412311 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530069 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-logs\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530116 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530152 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-scripts\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530187 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flrdp\" (UniqueName: \"kubernetes.io/projected/9cf78ba6-9116-42cf-8be5-809dd912646c-kube-api-access-flrdp\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530222 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-config-data\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530273 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530295 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.530340 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634089 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-config-data\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634186 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634209 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634251 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634276 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-logs\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634291 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634323 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-scripts\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.634351 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flrdp\" (UniqueName: \"kubernetes.io/projected/9cf78ba6-9116-42cf-8be5-809dd912646c-kube-api-access-flrdp\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.636497 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.636728 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.637085 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-logs\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.643979 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-scripts\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.644850 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.645458 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.645768 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-config-data\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.656593 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flrdp\" (UniqueName: \"kubernetes.io/projected/9cf78ba6-9116-42cf-8be5-809dd912646c-kube-api-access-flrdp\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.670864 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " pod="openstack/glance-default-external-api-0" Nov 23 07:11:25 crc kubenswrapper[5028]: I1123 07:11:25.719436 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:11:26 crc kubenswrapper[5028]: I1123 07:11:26.054712 5028 generic.go:334] "Generic (PLEG): container finished" podID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerID="77fa97d7bf7f52ef3b1c18b87dc33585d5409dbe348234c57d70fa538bb5a744" exitCode=143 Nov 23 07:11:26 crc kubenswrapper[5028]: I1123 07:11:26.054959 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37","Type":"ContainerDied","Data":"77fa97d7bf7f52ef3b1c18b87dc33585d5409dbe348234c57d70fa538bb5a744"} Nov 23 07:11:26 crc kubenswrapper[5028]: I1123 07:11:26.056464 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerStarted","Data":"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc"} Nov 23 07:11:26 crc kubenswrapper[5028]: I1123 07:11:26.288180 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:11:26 crc kubenswrapper[5028]: W1123 07:11:26.288201 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cf78ba6_9116_42cf_8be5_809dd912646c.slice/crio-4be48cf3b8444c4789432751f133cb47bd1fc49c9dccce88ae2fa8170c2a95f0 WatchSource:0}: Error finding container 4be48cf3b8444c4789432751f133cb47bd1fc49c9dccce88ae2fa8170c2a95f0: Status 404 returned error can't find the container with id 4be48cf3b8444c4789432751f133cb47bd1fc49c9dccce88ae2fa8170c2a95f0 Nov 23 07:11:26 crc kubenswrapper[5028]: I1123 07:11:26.479177 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.066197 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f520978-76fa-4bde-80df-dff8d693eb23" path="/var/lib/kubelet/pods/7f520978-76fa-4bde-80df-dff8d693eb23/volumes" Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.079033 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9cf78ba6-9116-42cf-8be5-809dd912646c","Type":"ContainerStarted","Data":"8b01950aabfee244fdd553d635003949a329f35cf0bf54c41d11700015415cf0"} Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.079085 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9cf78ba6-9116-42cf-8be5-809dd912646c","Type":"ContainerStarted","Data":"4be48cf3b8444c4789432751f133cb47bd1fc49c9dccce88ae2fa8170c2a95f0"} Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.082896 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerStarted","Data":"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209"} Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.083068 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-central-agent" containerID="cri-o://a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" gracePeriod=30 Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.083143 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 
07:11:27.083178 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="proxy-httpd" containerID="cri-o://783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209" gracePeriod=30 Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.083213 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="sg-core" containerID="cri-o://36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" gracePeriod=30 Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.083246 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-notification-agent" containerID="cri-o://df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" gracePeriod=30 Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.161296 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.499268732 podStartE2EDuration="8.161258214s" podCreationTimestamp="2025-11-23 07:11:19 +0000 UTC" firstStartedPulling="2025-11-23 07:11:20.801979318 +0000 UTC m=+1264.499384097" lastFinishedPulling="2025-11-23 07:11:26.4639688 +0000 UTC m=+1270.161373579" observedRunningTime="2025-11-23 07:11:27.157253656 +0000 UTC m=+1270.854658435" watchObservedRunningTime="2025-11-23 07:11:27.161258214 +0000 UTC m=+1270.858662993" Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.865114 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.981031 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-scripts\") pod \"cc4455e6-6afd-44fd-b290-8c6acce088ad\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.981121 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8r7v\" (UniqueName: \"kubernetes.io/projected/cc4455e6-6afd-44fd-b290-8c6acce088ad-kube-api-access-v8r7v\") pod \"cc4455e6-6afd-44fd-b290-8c6acce088ad\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.981150 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-combined-ca-bundle\") pod \"cc4455e6-6afd-44fd-b290-8c6acce088ad\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.981231 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-config-data\") pod \"cc4455e6-6afd-44fd-b290-8c6acce088ad\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.981277 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-log-httpd\") pod \"cc4455e6-6afd-44fd-b290-8c6acce088ad\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") "
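
The "Observed pod startup duration" entry above can be checked by hand from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (07:11:27.161258214 - 07:11:19 = 8.161258214s), and podStartSLOduration appears to be that end-to-end time minus the image-pull window (firstStartedPulling to lastFinishedPulling, 5.661989482s), which gives 2.499268732s. A minimal sketch that reproduces the arithmetic from the logged timestamps (the layout string and variable names are illustrative, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry
	// for openstack/ceilometer-0 above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-23 07:11:19 +0000 UTC")
	firstPull := parse("2025-11-23 07:11:20.801979318 +0000 UTC")
	lastPull := parse("2025-11-23 07:11:26.4639688 +0000 UTC")
	observed := parse("2025-11-23 07:11:27.161258214 +0000 UTC")

	e2e := observed.Sub(created)       // 8.161258214s, matches podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 5.661989482s spent pulling images
	slo := e2e - pulling               // 2.499268732s, matches podStartSLOduration
	fmt.Println(e2e, pulling, slo)
}
```

Nov 23 07:11:27 crc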
kubenswrapper[5028]: I1123 07:11:27.981338 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-run-httpd\") pod \"cc4455e6-6afd-44fd-b290-8c6acce088ad\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.981435 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-sg-core-conf-yaml\") pod \"cc4455e6-6afd-44fd-b290-8c6acce088ad\" (UID: \"cc4455e6-6afd-44fd-b290-8c6acce088ad\") " Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.982205 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc4455e6-6afd-44fd-b290-8c6acce088ad" (UID: "cc4455e6-6afd-44fd-b290-8c6acce088ad"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:27 crc kubenswrapper[5028]: I1123 07:11:27.982553 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc4455e6-6afd-44fd-b290-8c6acce088ad" (UID: "cc4455e6-6afd-44fd-b290-8c6acce088ad"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.003451 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4455e6-6afd-44fd-b290-8c6acce088ad-kube-api-access-v8r7v" (OuterVolumeSpecName: "kube-api-access-v8r7v") pod "cc4455e6-6afd-44fd-b290-8c6acce088ad" (UID: "cc4455e6-6afd-44fd-b290-8c6acce088ad"). InnerVolumeSpecName "kube-api-access-v8r7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.017141 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-scripts" (OuterVolumeSpecName: "scripts") pod "cc4455e6-6afd-44fd-b290-8c6acce088ad" (UID: "cc4455e6-6afd-44fd-b290-8c6acce088ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.019775 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc4455e6-6afd-44fd-b290-8c6acce088ad" (UID: "cc4455e6-6afd-44fd-b290-8c6acce088ad"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.051838 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc4455e6-6afd-44fd-b290-8c6acce088ad" (UID: "cc4455e6-6afd-44fd-b290-8c6acce088ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.084131 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.084165 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.084176 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc4455e6-6afd-44fd-b290-8c6acce088ad-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.084187 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.084197 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.084207 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8r7v\" (UniqueName: \"kubernetes.io/projected/cc4455e6-6afd-44fd-b290-8c6acce088ad-kube-api-access-v8r7v\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.095519 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-config-data" (OuterVolumeSpecName: "config-data") pod "cc4455e6-6afd-44fd-b290-8c6acce088ad" (UID: "cc4455e6-6afd-44fd-b290-8c6acce088ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098289 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerID="783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209" exitCode=0 Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098319 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerID="36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" exitCode=2 Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098328 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerID="df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" exitCode=0 Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098340 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerID="a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" exitCode=0 Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098394 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerDied","Data":"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209"} Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098422 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerDied","Data":"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc"} Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098434 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerDied","Data":"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0"} Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098446 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerDied","Data":"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8"} Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098457 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc4455e6-6afd-44fd-b290-8c6acce088ad","Type":"ContainerDied","Data":"238d0ea2a9df7ac52e95598448198a57f849e26b6c33d6f8246a77b09e6f43af"} Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098474 5028 scope.go:117] "RemoveContainer" containerID="783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209"
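
The exit codes in these PLEG "container finished" events follow the usual container convention: 0 is a clean shutdown, small positive values are application errors (sg-core exited with 2 here), and 128+N means the process died on signal N, so the exitCode=143 reported earlier for the glance-log container is 128+15, i.e. SIGTERM from a grace-period kill. A tiny decoder, assuming that convention holds for this runtime:

```go
package main

import "fmt"

// decode interprets a container exit code under the common
// 128+signal convention (an assumption, not stated by the log).
func decode(code int) string {
	switch {
	case code == 0:
		return "clean exit"
	case code > 128:
		return fmt.Sprintf("killed by signal %d (15=SIGTERM, 9=SIGKILL)", code-128)
	default:
		return fmt.Sprintf("application error (exit status %d)", code)
	}
}

func main() {
	for _, c := range []int{0, 2, 143} { // values seen in this log window
		fmt.Printf("exitCode=%d: %s\n", c, decode(c))
	}
}
```

Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.098934 5028 util.go:48] "No ready sandbox for pod can be found.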
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.101240 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9cf78ba6-9116-42cf-8be5-809dd912646c","Type":"ContainerStarted","Data":"ac4ead67e47260cf3f74b78ab8afc2ccc45af0107cfaafca7edd1336fddcee80"} Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.131006 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.1309833400000002 podStartE2EDuration="3.13098334s" podCreationTimestamp="2025-11-23 07:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:11:28.121397747 +0000 UTC m=+1271.818802546" watchObservedRunningTime="2025-11-23 07:11:28.13098334 +0000 UTC m=+1271.828388119" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.133272 5028 scope.go:117] "RemoveContainer" containerID="36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.185862 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4455e6-6afd-44fd-b290-8c6acce088ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.193756 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.203085 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.212446 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:28 crc kubenswrapper[5028]: E1123 07:11:28.213187 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-notification-agent" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.213271 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-notification-agent" Nov 23 07:11:28 crc kubenswrapper[5028]: E1123 07:11:28.213348 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="proxy-httpd" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.213408 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="proxy-httpd" Nov 23 07:11:28 crc kubenswrapper[5028]: E1123 07:11:28.213479 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="sg-core" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.213562 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="sg-core" Nov 23 07:11:28 crc kubenswrapper[5028]: E1123 07:11:28.213636 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-central-agent" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.213753 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-central-agent" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.214543 5028 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-notification-agent" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.214649 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="proxy-httpd" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.214819 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="ceilometer-central-agent" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.214899 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" containerName="sg-core" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.217149 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.219449 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.222243 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.242694 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.389066 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.389130 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-scripts\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.389178 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-log-httpd\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.389246 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.389279 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-run-httpd\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.389361 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-config-data\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " 
pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.389401 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz2n\" (UniqueName: \"kubernetes.io/projected/0ab95ac2-a075-4125-b45d-0b35ae03f09b-kube-api-access-ldz2n\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.490815 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.491183 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-scripts\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.491229 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-log-httpd\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.491253 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.491278 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-run-httpd\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.491327 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-config-data\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.491369 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldz2n\" (UniqueName: \"kubernetes.io/projected/0ab95ac2-a075-4125-b45d-0b35ae03f09b-kube-api-access-ldz2n\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.492191 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-run-httpd\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.492454 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-log-httpd\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " 
pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.498457 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-scripts\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.498856 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.499677 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.501841 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-config-data\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.510318 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldz2n\" (UniqueName: \"kubernetes.io/projected/0ab95ac2-a075-4125-b45d-0b35ae03f09b-kube-api-access-ldz2n\") pod \"ceilometer-0\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " pod="openstack/ceilometer-0" Nov 23 07:11:28 crc kubenswrapper[5028]: I1123 07:11:28.543883 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:29 crc kubenswrapper[5028]: I1123 07:11:29.066307 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4455e6-6afd-44fd-b290-8c6acce088ad" path="/var/lib/kubelet/pods/cc4455e6-6afd-44fd-b290-8c6acce088ad/volumes" Nov 23 07:11:29 crc kubenswrapper[5028]: I1123 07:11:29.123582 5028 generic.go:334] "Generic (PLEG): container finished" podID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerID="ff5aef9adabfdd605326740c8d8a830c5c23f6cca3761f541107ce03bf8b3745" exitCode=0 Nov 23 07:11:29 crc kubenswrapper[5028]: I1123 07:11:29.123875 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37","Type":"ContainerDied","Data":"ff5aef9adabfdd605326740c8d8a830c5c23f6cca3761f541107ce03bf8b3745"} Nov 23 07:11:29 crc kubenswrapper[5028]: I1123 07:11:29.485714 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:30 crc kubenswrapper[5028]: I1123 07:11:30.946798 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:11:30 crc kubenswrapper[5028]: I1123 07:11:30.947097 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:11:30 crc kubenswrapper[5028]: I1123 07:11:30.947138 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:11:30 crc kubenswrapper[5028]: I1123 07:11:30.947806 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fad5b43d5150bb13c86e0639fa99a9b8f8c637943306815b8e96f42f58e277f1"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:11:30 crc kubenswrapper[5028]: I1123 07:11:30.947855 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://fad5b43d5150bb13c86e0639fa99a9b8f8c637943306815b8e96f42f58e277f1" gracePeriod=600 Nov 23 07:11:31 crc kubenswrapper[5028]: I1123 07:11:31.146852 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="fad5b43d5150bb13c86e0639fa99a9b8f8c637943306815b8e96f42f58e277f1" exitCode=0 Nov 23 07:11:31 crc kubenswrapper[5028]: I1123 07:11:31.146897 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"fad5b43d5150bb13c86e0639fa99a9b8f8c637943306815b8e96f42f58e277f1"} Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.763446 5028 scope.go:117] "RemoveContainer" 
containerID="df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.885402 5028 scope.go:117] "RemoveContainer" containerID="a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.914079 5028 scope.go:117] "RemoveContainer" containerID="783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209" Nov 23 07:11:32 crc kubenswrapper[5028]: E1123 07:11:32.914567 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": container with ID starting with 783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209 not found: ID does not exist" containerID="783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.914622 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209"} err="failed to get container status \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": rpc error: code = NotFound desc = could not find container \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": container with ID starting with 783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.914652 5028 scope.go:117] "RemoveContainer" containerID="36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" Nov 23 07:11:32 crc kubenswrapper[5028]: E1123 07:11:32.915180 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": container with ID starting with 36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc not found: ID does not exist" containerID="36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.915216 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc"} err="failed to get container status \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": rpc error: code = NotFound desc = could not find container \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": container with ID starting with 36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.915242 5028 scope.go:117] "RemoveContainer" containerID="df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" Nov 23 07:11:32 crc kubenswrapper[5028]: E1123 07:11:32.915522 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": container with ID starting with df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0 not found: ID does not exist" containerID="df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.915556 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0"} err="failed to get container status \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": rpc error: code = NotFound desc = could not find container \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": container with ID starting with df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.915606 5028 scope.go:117] "RemoveContainer" containerID="a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" Nov 23 07:11:32 crc kubenswrapper[5028]: E1123 07:11:32.918265 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": container with ID starting with a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8 not found: ID does not exist" containerID="a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.918292 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8"} err="failed to get container status \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": rpc error: code = NotFound desc = could not find container \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": container with ID starting with a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.918311 5028 scope.go:117] "RemoveContainer" containerID="783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.921325 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209"} err="failed to get container status \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": rpc error: code = NotFound desc = could not find container \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": container with ID starting with 783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.921357 5028 scope.go:117] "RemoveContainer" containerID="36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.923392 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc"} err="failed to get container status \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": rpc error: code = NotFound desc = could not find container \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": container with ID starting with 36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.923421 5028 scope.go:117] "RemoveContainer" containerID="df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.926399 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0"} err="failed to get container status \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": rpc error: code = NotFound desc = could not find container \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": container with ID starting with df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.926563 5028 scope.go:117] "RemoveContainer" containerID="a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.927316 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8"} err="failed to get container status \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": rpc error: code = NotFound desc = could not find container \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": container with ID starting with a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.927348 5028 scope.go:117] "RemoveContainer" containerID="783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.928052 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209"} err="failed to get container status \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": rpc error: code = NotFound desc = could not find container \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": container with ID starting with 783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.928069 5028 scope.go:117] "RemoveContainer" containerID="36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.931097 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc"} err="failed to get container status \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": rpc error: code = NotFound desc = could not find container \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": container with ID starting with 36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.931121 5028 scope.go:117] "RemoveContainer" containerID="df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.931860 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0"} err="failed to get container status \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": rpc error: code = NotFound desc = could not find container \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": container with ID starting with df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0 not found: ID does not exist" Nov 
23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.931894 5028 scope.go:117] "RemoveContainer" containerID="a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.935256 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8"} err="failed to get container status \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": rpc error: code = NotFound desc = could not find container \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": container with ID starting with a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.935283 5028 scope.go:117] "RemoveContainer" containerID="783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.936223 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209"} err="failed to get container status \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": rpc error: code = NotFound desc = could not find container \"783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209\": container with ID starting with 783bfccbddc3116996848afbb168746de623b22d362a0107b547e4a21afa2209 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.936247 5028 scope.go:117] "RemoveContainer" containerID="36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.936990 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc"} err="failed to get container status \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": rpc error: code = NotFound desc = could not find container \"36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc\": container with ID starting with 36a226395ef96ff8bcd5d154dce8a01ae8ef01e386089ce1f4c7f6433c7498dc not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.937013 5028 scope.go:117] "RemoveContainer" containerID="df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.938322 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0"} err="failed to get container status \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": rpc error: code = NotFound desc = could not find container \"df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0\": container with ID starting with df9698e4119add040e1f685b904d9b398dc3fe0fba0fb12afdca9fa24330d8b0 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.938346 5028 scope.go:117] "RemoveContainer" containerID="a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.938623 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8"} err="failed to get container status 
\"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": rpc error: code = NotFound desc = could not find container \"a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8\": container with ID starting with a4bb8f0e02abbf853f5c36406f87a8669bc650ae8e573325e4bd5b57aa7105f8 not found: ID does not exist" Nov 23 07:11:32 crc kubenswrapper[5028]: I1123 07:11:32.938643 5028 scope.go:117] "RemoveContainer" containerID="153587bc7d2c56988e66c308bcfff99ff6e0585b23f02a263bcccf8ba46424dd" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.061294 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195283 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-combined-ca-bundle\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195355 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-config-data\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195428 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-logs\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195450 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhhdf\" (UniqueName: \"kubernetes.io/projected/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-kube-api-access-zhhdf\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195472 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195504 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-httpd-run\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195535 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-internal-tls-certs\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.195555 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-scripts\") pod \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\" (UID: \"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37\") " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.199193 5028 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" event={"ID":"78e0a689-cd74-4797-9d4f-8647ec86df48","Type":"ContainerStarted","Data":"34a6dec9846a2576b2a4998da3347057d7253ffff2a03d02cf24883c4bb89960"} Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.200771 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-logs" (OuterVolumeSpecName: "logs") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.203942 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.204832 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-kube-api-access-zhhdf" (OuterVolumeSpecName: "kube-api-access-zhhdf") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "kube-api-access-zhhdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.205020 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-scripts" (OuterVolumeSpecName: "scripts") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.208636 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.212443 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37","Type":"ContainerDied","Data":"bc209808a3b9b76c9e40e464bda550080bf436c8be8b220d6841516b422e560f"} Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.212496 5028 scope.go:117] "RemoveContainer" containerID="ff5aef9adabfdd605326740c8d8a830c5c23f6cca3761f541107ce03bf8b3745" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.212613 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.219527 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"a9f3c8aba32cb3954ec9074865809a85ea7ad623b69b523ea2d4be3943bce523"} Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.230957 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" podStartSLOduration=1.651906103 podStartE2EDuration="10.230898407s" podCreationTimestamp="2025-11-23 07:11:23 +0000 UTC" firstStartedPulling="2025-11-23 07:11:24.230429535 +0000 UTC m=+1267.927834314" lastFinishedPulling="2025-11-23 07:11:32.809421839 +0000 UTC m=+1276.506826618" observedRunningTime="2025-11-23 07:11:33.21292772 +0000 UTC m=+1276.910332509" watchObservedRunningTime="2025-11-23 07:11:33.230898407 +0000 UTC m=+1276.928303186" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.247204 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.249032 5028 scope.go:117] "RemoveContainer" containerID="77fa97d7bf7f52ef3b1c18b87dc33585d5409dbe348234c57d70fa538bb5a744" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.262056 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.272753 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-config-data" (OuterVolumeSpecName: "config-data") pod "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" (UID: "e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303305 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303382 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303573 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303602 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhhdf\" (UniqueName: \"kubernetes.io/projected/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-kube-api-access-zhhdf\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303625 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303635 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303646 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.303657 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.321076 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.324267 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 23 07:11:33 crc kubenswrapper[5028]: W1123 07:11:33.325487 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab95ac2_a075_4125_b45d_0b35ae03f09b.slice/crio-6fe998934c5ecfef689ae09069bf7638f99fe3baf143276865ca9300d1509c42 WatchSource:0}: Error finding container 6fe998934c5ecfef689ae09069bf7638f99fe3baf143276865ca9300d1509c42: Status 404 returned error can't find the container with id 6fe998934c5ecfef689ae09069bf7638f99fe3baf143276865ca9300d1509c42 Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.405097 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.544003 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.558145 5028 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.576117 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:11:33 crc kubenswrapper[5028]: E1123 07:11:33.576527 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-httpd" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.576547 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-httpd" Nov 23 07:11:33 crc kubenswrapper[5028]: E1123 07:11:33.576581 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-log" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.576589 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-log" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.576798 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-httpd" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.576838 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" containerName="glance-log" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.577811 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.581071 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.588020 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.594257 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.720846 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.720901 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.720931 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.721092 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.721199 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.721232 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.721609 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspc8\" (UniqueName: \"kubernetes.io/projected/7d35d26e-c3f7-4597-80c6-60358f2d2c21-kube-api-access-tspc8\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.721700 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823120 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tspc8\" (UniqueName: \"kubernetes.io/projected/7d35d26e-c3f7-4597-80c6-60358f2d2c21-kube-api-access-tspc8\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823431 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823537 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823584 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823623 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823648 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823675 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.823699 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.824009 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.824156 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.826022 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.829575 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.835865 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.836279 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.844149 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.852252 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tspc8\" (UniqueName: \"kubernetes.io/projected/7d35d26e-c3f7-4597-80c6-60358f2d2c21-kube-api-access-tspc8\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.866354 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " pod="openstack/glance-default-internal-api-0" Nov 23 07:11:33 crc kubenswrapper[5028]: I1123 07:11:33.904555 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:34 crc kubenswrapper[5028]: I1123 07:11:34.265211 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerStarted","Data":"7d682609923309a6becba9dd5f0f76758d7fa45ce25cea59cffd36b981f1ed62"} Nov 23 07:11:34 crc kubenswrapper[5028]: I1123 07:11:34.265501 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerStarted","Data":"6fe998934c5ecfef689ae09069bf7638f99fe3baf143276865ca9300d1509c42"} Nov 23 07:11:34 crc kubenswrapper[5028]: I1123 07:11:34.520598 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:11:34 crc kubenswrapper[5028]: W1123 07:11:34.533123 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d35d26e_c3f7_4597_80c6_60358f2d2c21.slice/crio-b9799ec3d2cd4efb9c885be833eb07aea0c04d7fe621cef5862759b070903940 WatchSource:0}: Error finding container b9799ec3d2cd4efb9c885be833eb07aea0c04d7fe621cef5862759b070903940: Status 404 returned error can't find the container with id b9799ec3d2cd4efb9c885be833eb07aea0c04d7fe621cef5862759b070903940 Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.078780 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37" path="/var/lib/kubelet/pods/e45b2b4f-3db3-4692-9eb6-c4ac5e07ed37/volumes" Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.279547 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerStarted","Data":"ddea271620eda0c7f0d6ef2844d3b3c1efabe6b03f6619d0c7dc086524b0068e"} Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.282307 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7d35d26e-c3f7-4597-80c6-60358f2d2c21","Type":"ContainerStarted","Data":"8074037e96098a8a2eef2221bb33919ab66ce6682e6fcf1f6adb64b678e2bbed"} Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.282341 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d35d26e-c3f7-4597-80c6-60358f2d2c21","Type":"ContainerStarted","Data":"b9799ec3d2cd4efb9c885be833eb07aea0c04d7fe621cef5862759b070903940"} Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.721588 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.721913 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.749485 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:11:35 crc kubenswrapper[5028]: I1123 07:11:35.763576 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 07:11:36 crc kubenswrapper[5028]: I1123 07:11:36.292083 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerStarted","Data":"b308dd0f0cb8ca194c0a9a979344d0ba8268c85d6315e30ace2012cd3ca135ac"} Nov 23 07:11:36 crc kubenswrapper[5028]: I1123 07:11:36.294748 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d35d26e-c3f7-4597-80c6-60358f2d2c21","Type":"ContainerStarted","Data":"4217f079f64973f5810531f58f78f10d09ef89a5b6288de55515155677e95e0a"} Nov 23 07:11:36 crc kubenswrapper[5028]: I1123 07:11:36.295130 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:11:36 crc kubenswrapper[5028]: I1123 07:11:36.295220 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 07:11:36 crc kubenswrapper[5028]: I1123 07:11:36.326340 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.326318077 podStartE2EDuration="3.326318077s" podCreationTimestamp="2025-11-23 07:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:11:36.315808871 +0000 UTC m=+1280.013213660" watchObservedRunningTime="2025-11-23 07:11:36.326318077 +0000 UTC m=+1280.023722856" Nov 23 07:11:37 crc kubenswrapper[5028]: I1123 07:11:37.308274 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerStarted","Data":"4ce9b501e7bdb9490619570b4faf536a706311f5912aa564bed6cfadc354b152"} Nov 23 07:11:37 crc kubenswrapper[5028]: I1123 07:11:37.308715 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-central-agent" containerID="cri-o://7d682609923309a6becba9dd5f0f76758d7fa45ce25cea59cffd36b981f1ed62" gracePeriod=30 Nov 23 07:11:37 crc kubenswrapper[5028]: I1123 07:11:37.308728 5028 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="sg-core" containerID="cri-o://b308dd0f0cb8ca194c0a9a979344d0ba8268c85d6315e30ace2012cd3ca135ac" gracePeriod=30 Nov 23 07:11:37 crc kubenswrapper[5028]: I1123 07:11:37.308751 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-notification-agent" containerID="cri-o://ddea271620eda0c7f0d6ef2844d3b3c1efabe6b03f6619d0c7dc086524b0068e" gracePeriod=30 Nov 23 07:11:37 crc kubenswrapper[5028]: I1123 07:11:37.308728 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="proxy-httpd" containerID="cri-o://4ce9b501e7bdb9490619570b4faf536a706311f5912aa564bed6cfadc354b152" gracePeriod=30 Nov 23 07:11:37 crc kubenswrapper[5028]: I1123 07:11:37.337652 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.8196257110000005 podStartE2EDuration="9.337636266s" podCreationTimestamp="2025-11-23 07:11:28 +0000 UTC" firstStartedPulling="2025-11-23 07:11:33.32851599 +0000 UTC m=+1277.025920769" lastFinishedPulling="2025-11-23 07:11:36.846526545 +0000 UTC m=+1280.543931324" observedRunningTime="2025-11-23 07:11:37.334874659 +0000 UTC m=+1281.032279438" watchObservedRunningTime="2025-11-23 07:11:37.337636266 +0000 UTC m=+1281.035041045" Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.229343 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.253721 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.322216 5028 generic.go:334] "Generic (PLEG): container finished" podID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerID="4ce9b501e7bdb9490619570b4faf536a706311f5912aa564bed6cfadc354b152" exitCode=0 Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.322247 5028 generic.go:334] "Generic (PLEG): container finished" podID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerID="b308dd0f0cb8ca194c0a9a979344d0ba8268c85d6315e30ace2012cd3ca135ac" exitCode=2 Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.322255 5028 generic.go:334] "Generic (PLEG): container finished" podID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerID="ddea271620eda0c7f0d6ef2844d3b3c1efabe6b03f6619d0c7dc086524b0068e" exitCode=0 Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.322284 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerDied","Data":"4ce9b501e7bdb9490619570b4faf536a706311f5912aa564bed6cfadc354b152"} Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.322323 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerDied","Data":"b308dd0f0cb8ca194c0a9a979344d0ba8268c85d6315e30ace2012cd3ca135ac"} Nov 23 07:11:38 crc kubenswrapper[5028]: I1123 07:11:38.322337 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerDied","Data":"ddea271620eda0c7f0d6ef2844d3b3c1efabe6b03f6619d0c7dc086524b0068e"} Nov 23 
07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.386711 5028 generic.go:334] "Generic (PLEG): container finished" podID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerID="7d682609923309a6becba9dd5f0f76758d7fa45ce25cea59cffd36b981f1ed62" exitCode=0 Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.386777 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerDied","Data":"7d682609923309a6becba9dd5f0f76758d7fa45ce25cea59cffd36b981f1ed62"} Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.600154 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.739328 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-scripts\") pod \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.739398 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-sg-core-conf-yaml\") pod \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.739437 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-config-data\") pod \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.739484 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-combined-ca-bundle\") pod \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.739577 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-log-httpd\") pod \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.739636 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-run-httpd\") pod \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.739688 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldz2n\" (UniqueName: \"kubernetes.io/projected/0ab95ac2-a075-4125-b45d-0b35ae03f09b-kube-api-access-ldz2n\") pod \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\" (UID: \"0ab95ac2-a075-4125-b45d-0b35ae03f09b\") " Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.741185 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ab95ac2-a075-4125-b45d-0b35ae03f09b" (UID: "0ab95ac2-a075-4125-b45d-0b35ae03f09b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.741478 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ab95ac2-a075-4125-b45d-0b35ae03f09b" (UID: "0ab95ac2-a075-4125-b45d-0b35ae03f09b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.747790 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab95ac2-a075-4125-b45d-0b35ae03f09b-kube-api-access-ldz2n" (OuterVolumeSpecName: "kube-api-access-ldz2n") pod "0ab95ac2-a075-4125-b45d-0b35ae03f09b" (UID: "0ab95ac2-a075-4125-b45d-0b35ae03f09b"). InnerVolumeSpecName "kube-api-access-ldz2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.748335 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-scripts" (OuterVolumeSpecName: "scripts") pod "0ab95ac2-a075-4125-b45d-0b35ae03f09b" (UID: "0ab95ac2-a075-4125-b45d-0b35ae03f09b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.795435 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ab95ac2-a075-4125-b45d-0b35ae03f09b" (UID: "0ab95ac2-a075-4125-b45d-0b35ae03f09b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.842053 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.842087 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.842101 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.842113 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ab95ac2-a075-4125-b45d-0b35ae03f09b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.842124 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldz2n\" (UniqueName: \"kubernetes.io/projected/0ab95ac2-a075-4125-b45d-0b35ae03f09b-kube-api-access-ldz2n\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.846361 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ab95ac2-a075-4125-b45d-0b35ae03f09b" (UID: "0ab95ac2-a075-4125-b45d-0b35ae03f09b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.846983 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-config-data" (OuterVolumeSpecName: "config-data") pod "0ab95ac2-a075-4125-b45d-0b35ae03f09b" (UID: "0ab95ac2-a075-4125-b45d-0b35ae03f09b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.905696 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.906097 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.935194 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.943597 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.943620 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab95ac2-a075-4125-b45d-0b35ae03f09b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:43 crc kubenswrapper[5028]: I1123 07:11:43.961566 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.398472 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ab95ac2-a075-4125-b45d-0b35ae03f09b","Type":"ContainerDied","Data":"6fe998934c5ecfef689ae09069bf7638f99fe3baf143276865ca9300d1509c42"} Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.398535 5028 scope.go:117] "RemoveContainer" containerID="4ce9b501e7bdb9490619570b4faf536a706311f5912aa564bed6cfadc354b152" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.398684 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.401641 5028 generic.go:334] "Generic (PLEG): container finished" podID="78e0a689-cd74-4797-9d4f-8647ec86df48" containerID="34a6dec9846a2576b2a4998da3347057d7253ffff2a03d02cf24883c4bb89960" exitCode=0 Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.402723 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" event={"ID":"78e0a689-cd74-4797-9d4f-8647ec86df48","Type":"ContainerDied","Data":"34a6dec9846a2576b2a4998da3347057d7253ffff2a03d02cf24883c4bb89960"} Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.402756 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.402935 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.463881 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.465372 5028 scope.go:117] "RemoveContainer" containerID="b308dd0f0cb8ca194c0a9a979344d0ba8268c85d6315e30ace2012cd3ca135ac" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.478798 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.500202 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:44 crc kubenswrapper[5028]: E1123 07:11:44.500666 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="sg-core" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.500687 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="sg-core" Nov 23 07:11:44 crc kubenswrapper[5028]: E1123 07:11:44.500708 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-notification-agent" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.500718 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-notification-agent" Nov 23 07:11:44 crc kubenswrapper[5028]: E1123 07:11:44.500741 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-central-agent" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.500748 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-central-agent" Nov 23 07:11:44 crc kubenswrapper[5028]: E1123 07:11:44.500761 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="proxy-httpd" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.500768 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="proxy-httpd" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.501106 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-central-agent" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.501140 5028 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="ceilometer-notification-agent" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.501161 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="proxy-httpd" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.501179 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" containerName="sg-core" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.503710 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.509436 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.509825 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.519313 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.526332 5028 scope.go:117] "RemoveContainer" containerID="ddea271620eda0c7f0d6ef2844d3b3c1efabe6b03f6619d0c7dc086524b0068e" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.548387 5028 scope.go:117] "RemoveContainer" containerID="7d682609923309a6becba9dd5f0f76758d7fa45ce25cea59cffd36b981f1ed62" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.553622 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-scripts\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.553691 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-config-data\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.553758 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-log-httpd\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.553783 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-run-httpd\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.553806 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.553832 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.553921 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76wzf\" (UniqueName: \"kubernetes.io/projected/604796de-b46b-48b6-a9ae-bebdecff4609-kube-api-access-76wzf\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.655542 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76wzf\" (UniqueName: \"kubernetes.io/projected/604796de-b46b-48b6-a9ae-bebdecff4609-kube-api-access-76wzf\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.655853 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-scripts\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.656701 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-config-data\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.656938 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-log-httpd\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.657109 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-run-httpd\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.657266 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.658578 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.657652 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-run-httpd\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.657387 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-log-httpd\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.660877 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.661911 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.667927 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-scripts\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.672221 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76wzf\" (UniqueName: \"kubernetes.io/projected/604796de-b46b-48b6-a9ae-bebdecff4609-kube-api-access-76wzf\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.680392 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-config-data\") pod \"ceilometer-0\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") " pod="openstack/ceilometer-0" Nov 23 07:11:44 crc kubenswrapper[5028]: I1123 07:11:44.832831 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.065320 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab95ac2-a075-4125-b45d-0b35ae03f09b" path="/var/lib/kubelet/pods/0ab95ac2-a075-4125-b45d-0b35ae03f09b/volumes" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.343131 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.412260 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerStarted","Data":"53188307914ce8ef0c8125dbee09f2fb2e9a10253714bf5847f2a68d3125ea54"} Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.762562 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.888610 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r977w\" (UniqueName: \"kubernetes.io/projected/78e0a689-cd74-4797-9d4f-8647ec86df48-kube-api-access-r977w\") pod \"78e0a689-cd74-4797-9d4f-8647ec86df48\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.888716 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-scripts\") pod \"78e0a689-cd74-4797-9d4f-8647ec86df48\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.888859 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-combined-ca-bundle\") pod \"78e0a689-cd74-4797-9d4f-8647ec86df48\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.888887 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-config-data\") pod \"78e0a689-cd74-4797-9d4f-8647ec86df48\" (UID: \"78e0a689-cd74-4797-9d4f-8647ec86df48\") " Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.892981 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e0a689-cd74-4797-9d4f-8647ec86df48-kube-api-access-r977w" (OuterVolumeSpecName: "kube-api-access-r977w") pod "78e0a689-cd74-4797-9d4f-8647ec86df48" (UID: "78e0a689-cd74-4797-9d4f-8647ec86df48"). InnerVolumeSpecName "kube-api-access-r977w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.893325 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-scripts" (OuterVolumeSpecName: "scripts") pod "78e0a689-cd74-4797-9d4f-8647ec86df48" (UID: "78e0a689-cd74-4797-9d4f-8647ec86df48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.917164 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78e0a689-cd74-4797-9d4f-8647ec86df48" (UID: "78e0a689-cd74-4797-9d4f-8647ec86df48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.925853 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-config-data" (OuterVolumeSpecName: "config-data") pod "78e0a689-cd74-4797-9d4f-8647ec86df48" (UID: "78e0a689-cd74-4797-9d4f-8647ec86df48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.991386 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r977w\" (UniqueName: \"kubernetes.io/projected/78e0a689-cd74-4797-9d4f-8647ec86df48-kube-api-access-r977w\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.991423 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.991432 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:45 crc kubenswrapper[5028]: I1123 07:11:45.991440 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e0a689-cd74-4797-9d4f-8647ec86df48-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.352068 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.422530 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerStarted","Data":"d801f00cd19fbf528af6b558d94371b98c462f082cf66d4bfd77a3e4a242d5eb"} Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.424276 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.424272 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" event={"ID":"78e0a689-cd74-4797-9d4f-8647ec86df48","Type":"ContainerDied","Data":"fb22eca6ae8b8f8942bc0738381b08e32ff7173ab22d18e67d720fa3a2bbec0a"} Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.424326 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb22eca6ae8b8f8942bc0738381b08e32ff7173ab22d18e67d720fa3a2bbec0a" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.424282 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8v4x" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.439433 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.557386 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:11:46 crc kubenswrapper[5028]: E1123 07:11:46.558085 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e0a689-cd74-4797-9d4f-8647ec86df48" containerName="nova-cell0-conductor-db-sync" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.558096 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e0a689-cd74-4797-9d4f-8647ec86df48" containerName="nova-cell0-conductor-db-sync" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.558297 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e0a689-cd74-4797-9d4f-8647ec86df48" containerName="nova-cell0-conductor-db-sync" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.558873 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.567592 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4zjjs" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.568319 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.578038 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.701664 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cq2\" (UniqueName: \"kubernetes.io/projected/daff2282-19f8-48c7-8d1b-780fbe97ec5a-kube-api-access-g9cq2\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.701713 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.701752 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.803174 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9cq2\" (UniqueName: \"kubernetes.io/projected/daff2282-19f8-48c7-8d1b-780fbe97ec5a-kube-api-access-g9cq2\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.803438 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.803478 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.810990 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.812494 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.823620 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9cq2\" (UniqueName: \"kubernetes.io/projected/daff2282-19f8-48c7-8d1b-780fbe97ec5a-kube-api-access-g9cq2\") pod \"nova-cell0-conductor-0\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:46 crc kubenswrapper[5028]: I1123 07:11:46.914001 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:47 crc kubenswrapper[5028]: I1123 07:11:47.393585 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:11:47 crc kubenswrapper[5028]: W1123 07:11:47.400089 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaff2282_19f8_48c7_8d1b_780fbe97ec5a.slice/crio-ff9e6978564e48b0c5e0fe28e06b5c777dfa825bcf4f62818eecfad25096338e WatchSource:0}: Error finding container ff9e6978564e48b0c5e0fe28e06b5c777dfa825bcf4f62818eecfad25096338e: Status 404 returned error can't find the container with id ff9e6978564e48b0c5e0fe28e06b5c777dfa825bcf4f62818eecfad25096338e Nov 23 07:11:47 crc kubenswrapper[5028]: I1123 07:11:47.435614 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerStarted","Data":"b3a362a131b827c3ce2f0f339a320e607dd85b506ccd2e7765967071f8135415"} Nov 23 07:11:47 crc kubenswrapper[5028]: I1123 07:11:47.435657 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerStarted","Data":"ff20eb6f31a8f3af5fe92a56df00232c3a186ab116ef1d068710d6eb07025526"} Nov 23 07:11:47 crc kubenswrapper[5028]: I1123 07:11:47.438985 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"daff2282-19f8-48c7-8d1b-780fbe97ec5a","Type":"ContainerStarted","Data":"ff9e6978564e48b0c5e0fe28e06b5c777dfa825bcf4f62818eecfad25096338e"} Nov 23 07:11:48 crc kubenswrapper[5028]: I1123 07:11:48.451055 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"daff2282-19f8-48c7-8d1b-780fbe97ec5a","Type":"ContainerStarted","Data":"718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725"} Nov 23 07:11:48 crc kubenswrapper[5028]: I1123 07:11:48.451464 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:48 crc kubenswrapper[5028]: I1123 07:11:48.470801 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.470786461 podStartE2EDuration="2.470786461s" podCreationTimestamp="2025-11-23 07:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:11:48.470033872 +0000 UTC m=+1292.167438651" watchObservedRunningTime="2025-11-23 07:11:48.470786461 +0000 UTC m=+1292.168191240" Nov 23 07:11:49 crc kubenswrapper[5028]: I1123 07:11:49.463178 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerStarted","Data":"1cca987ba31c26d37c23b81a8ad83b55324ce618e4dde1dc1159757fdcd16c13"} Nov 23 07:11:49 crc kubenswrapper[5028]: I1123 07:11:49.489296 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.581431324 podStartE2EDuration="5.489278594s" podCreationTimestamp="2025-11-23 07:11:44 +0000 UTC" firstStartedPulling="2025-11-23 07:11:45.323076699 +0000 UTC m=+1289.020481508" lastFinishedPulling="2025-11-23 07:11:48.230923999 +0000 UTC m=+1291.928328778" observedRunningTime="2025-11-23 07:11:49.485217975 +0000 UTC m=+1293.182622744" 
watchObservedRunningTime="2025-11-23 07:11:49.489278594 +0000 UTC m=+1293.186683373" Nov 23 07:11:50 crc kubenswrapper[5028]: I1123 07:11:50.476250 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:11:56 crc kubenswrapper[5028]: I1123 07:11:56.959708 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.387668 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nnh6w"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.393702 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.396118 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.396297 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.398743 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nnh6w"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.518288 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-config-data\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.518398 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnwx9\" (UniqueName: \"kubernetes.io/projected/0625eec8-1472-4c19-8ebd-c2a9260a5231-kube-api-access-lnwx9\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.518431 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-scripts\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.518501 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.617087 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.618932 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.623359 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-config-data\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.623443 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnwx9\" (UniqueName: \"kubernetes.io/projected/0625eec8-1472-4c19-8ebd-c2a9260a5231-kube-api-access-lnwx9\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.623466 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-scripts\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.623525 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.629793 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.636643 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-config-data\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.640238 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.656628 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnwx9\" (UniqueName: \"kubernetes.io/projected/0625eec8-1472-4c19-8ebd-c2a9260a5231-kube-api-access-lnwx9\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.657864 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-scripts\") pod \"nova-cell0-cell-mapping-nnh6w\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.664544 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.667099 5028 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.673075 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.716914 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.730914 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.731029 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.731465 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-config-data\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.731553 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-logs\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.731626 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ckd\" (UniqueName: \"kubernetes.io/projected/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-kube-api-access-77ckd\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.758700 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.832891 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-config-data\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.832933 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-logs\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.832982 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77ckd\" (UniqueName: \"kubernetes.io/projected/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-kube-api-access-77ckd\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.833056 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnzt\" (UniqueName: \"kubernetes.io/projected/a12a485c-957d-4ae6-967b-c9243b7792e9-kube-api-access-dxnzt\") pod \"nova-metadata-0\" 
(UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.833080 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.833098 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a12a485c-957d-4ae6-967b-c9243b7792e9-logs\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.833126 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.833147 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-config-data\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.834088 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-logs\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.844583 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-config-data\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.845318 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.851734 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.852885 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.858847 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.862774 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77ckd\" (UniqueName: \"kubernetes.io/projected/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-kube-api-access-77ckd\") pod \"nova-api-0\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " pod="openstack/nova-api-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.866418 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.884334 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64dbf5859c-gm6vb"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.885960 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.904167 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64dbf5859c-gm6vb"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.930694 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.931663 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.934797 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxnzt\" (UniqueName: \"kubernetes.io/projected/a12a485c-957d-4ae6-967b-c9243b7792e9-kube-api-access-dxnzt\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.934835 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.934861 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a12a485c-957d-4ae6-967b-c9243b7792e9-logs\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.934878 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-config-data\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.934924 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.935039 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffdmj\" (UniqueName: \"kubernetes.io/projected/fe69df49-0822-44c8-b362-fdb7f0683f88-kube-api-access-ffdmj\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.935098 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.936290 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a12a485c-957d-4ae6-967b-c9243b7792e9-logs\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.939776 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.940036 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.942511 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-config-data\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.950017 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:11:57 crc kubenswrapper[5028]: I1123 07:11:57.956470 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxnzt\" (UniqueName: \"kubernetes.io/projected/a12a485c-957d-4ae6-967b-c9243b7792e9-kube-api-access-dxnzt\") pod \"nova-metadata-0\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " pod="openstack/nova-metadata-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.033981 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.036713 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69rjp\" (UniqueName: \"kubernetes.io/projected/83c035a0-1f60-4649-bace-86aa5ee413ce-kube-api-access-69rjp\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.036778 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.036847 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-svc\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.036887 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffdmj\" (UniqueName: \"kubernetes.io/projected/fe69df49-0822-44c8-b362-fdb7f0683f88-kube-api-access-ffdmj\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.036958 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-sb\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.036990 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-config\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.037011 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-swift-storage-0\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.037035 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.037067 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-nb\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: 
\"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.037691 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm5rl\" (UniqueName: \"kubernetes.io/projected/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-kube-api-access-vm5rl\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.037803 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.038096 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-config-data\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.044180 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.044619 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.057373 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffdmj\" (UniqueName: \"kubernetes.io/projected/fe69df49-0822-44c8-b362-fdb7f0683f88-kube-api-access-ffdmj\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.072154 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.139403 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm5rl\" (UniqueName: \"kubernetes.io/projected/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-kube-api-access-vm5rl\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.139752 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.139774 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-config-data\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.139825 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69rjp\" (UniqueName: \"kubernetes.io/projected/83c035a0-1f60-4649-bace-86aa5ee413ce-kube-api-access-69rjp\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.139883 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-svc\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.140728 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-sb\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.140780 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-config\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.140798 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-swift-storage-0\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.140830 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-nb\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.142750 
5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-sb\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.144523 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-svc\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.144755 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-swift-storage-0\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.145062 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-nb\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.145090 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-config\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.145871 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.152367 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-config-data\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.156350 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm5rl\" (UniqueName: \"kubernetes.io/projected/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-kube-api-access-vm5rl\") pod \"nova-scheduler-0\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") " pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.156863 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69rjp\" (UniqueName: \"kubernetes.io/projected/83c035a0-1f60-4649-bace-86aa5ee413ce-kube-api-access-69rjp\") pod \"dnsmasq-dns-64dbf5859c-gm6vb\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") " pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.255011 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.268581 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.278226 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.380724 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nnh6w"] Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.556470 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nnh6w" event={"ID":"0625eec8-1472-4c19-8ebd-c2a9260a5231","Type":"ContainerStarted","Data":"e16b5907f2e9f354287194ba1afd474f019f0880a45849095ee12111e1863485"} Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.557790 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-db4rd"] Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.559084 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.561404 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.561605 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.576153 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-db4rd"] Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.587915 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:11:58 crc kubenswrapper[5028]: W1123 07:11:58.589595 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69b8f2fd_fc67_4c34_9255_cf040d4dfecc.slice/crio-e499df9d77031ee29cce883ccceb9db59d338038c4a627e3a2d055649cab5769 WatchSource:0}: Error finding container e499df9d77031ee29cce883ccceb9db59d338038c4a627e3a2d055649cab5769: Status 404 returned error can't find the container with id e499df9d77031ee29cce883ccceb9db59d338038c4a627e3a2d055649cab5769 Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.653231 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-config-data\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.653346 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8djd\" (UniqueName: \"kubernetes.io/projected/f844edf7-0721-4de7-a55d-615ede8fa93a-kube-api-access-g8djd\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.653366 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-scripts\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.653439 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.703587 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.754721 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.754834 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-config-data\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.754927 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8djd\" (UniqueName: \"kubernetes.io/projected/f844edf7-0721-4de7-a55d-615ede8fa93a-kube-api-access-g8djd\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.754972 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-scripts\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.762541 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-config-data\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.764333 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-scripts\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.764734 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 
07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.783603 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8djd\" (UniqueName: \"kubernetes.io/projected/f844edf7-0721-4de7-a55d-615ede8fa93a-kube-api-access-g8djd\") pod \"nova-cell1-conductor-db-sync-db4rd\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:58 crc kubenswrapper[5028]: I1123 07:11:58.891280 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.076163 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.106000 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64dbf5859c-gm6vb"] Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.130398 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.358521 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-db4rd"] Nov 23 07:11:59 crc kubenswrapper[5028]: W1123 07:11:59.369643 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf844edf7_0721_4de7_a55d_615ede8fa93a.slice/crio-b5c699b53d2bd9afb20ca3e8217fae77a99ef3c029a02114245c58ce88c3cb86 WatchSource:0}: Error finding container b5c699b53d2bd9afb20ca3e8217fae77a99ef3c029a02114245c58ce88c3cb86: Status 404 returned error can't find the container with id b5c699b53d2bd9afb20ca3e8217fae77a99ef3c029a02114245c58ce88c3cb86 Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.569081 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nnh6w" event={"ID":"0625eec8-1472-4c19-8ebd-c2a9260a5231","Type":"ContainerStarted","Data":"552e38d88689ec46fa8727e87ef0816f6ef787b7b5c68938daf09f72f26e9a9f"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.587593 5028 generic.go:334] "Generic (PLEG): container finished" podID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerID="caedb149b29e09c90cd8368ec2f32d3b903eaf6fed6262d942c66c394ec827bb" exitCode=0 Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.587684 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" event={"ID":"83c035a0-1f60-4649-bace-86aa5ee413ce","Type":"ContainerDied","Data":"caedb149b29e09c90cd8368ec2f32d3b903eaf6fed6262d942c66c394ec827bb"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.587708 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" event={"ID":"83c035a0-1f60-4649-bace-86aa5ee413ce","Type":"ContainerStarted","Data":"c719bbec86bdbf319cbd3a617102b5a293b22a27f6046d3868e2630b80f305dc"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.592811 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe69df49-0822-44c8-b362-fdb7f0683f88","Type":"ContainerStarted","Data":"ea0cfd6a4ffdf1a3b9a3f1d9c9e6d783e5abee4545c8b004e34e0751572f3c69"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.595858 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"69b8f2fd-fc67-4c34-9255-cf040d4dfecc","Type":"ContainerStarted","Data":"e499df9d77031ee29cce883ccceb9db59d338038c4a627e3a2d055649cab5769"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.599184 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a","Type":"ContainerStarted","Data":"0fae13be9ac2054cf94535b323601acf59064d07d6890c6f1f65a3fe7ec64122"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.601864 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-db4rd" event={"ID":"f844edf7-0721-4de7-a55d-615ede8fa93a","Type":"ContainerStarted","Data":"22b4e39ada1d711841f2d5ab6596639fc374030997ad8dac32ca318f11a99634"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.601894 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-db4rd" event={"ID":"f844edf7-0721-4de7-a55d-615ede8fa93a","Type":"ContainerStarted","Data":"b5c699b53d2bd9afb20ca3e8217fae77a99ef3c029a02114245c58ce88c3cb86"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.610849 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a12a485c-957d-4ae6-967b-c9243b7792e9","Type":"ContainerStarted","Data":"d8d64b4b218e8a4d184f4b45396276112019aeb33241012d3df6282f9034b6e9"} Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.611873 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nnh6w" podStartSLOduration=2.611862784 podStartE2EDuration="2.611862784s" podCreationTimestamp="2025-11-23 07:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:11:59.5875452 +0000 UTC m=+1303.284949979" watchObservedRunningTime="2025-11-23 07:11:59.611862784 +0000 UTC m=+1303.309267563" Nov 23 07:11:59 crc kubenswrapper[5028]: I1123 07:11:59.632856 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-db4rd" podStartSLOduration=1.632829567 podStartE2EDuration="1.632829567s" podCreationTimestamp="2025-11-23 07:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:11:59.624874913 +0000 UTC m=+1303.322279712" watchObservedRunningTime="2025-11-23 07:11:59.632829567 +0000 UTC m=+1303.330234346" Nov 23 07:12:00 crc kubenswrapper[5028]: I1123 07:12:00.648182 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" event={"ID":"83c035a0-1f60-4649-bace-86aa5ee413ce","Type":"ContainerStarted","Data":"d347353db33af1d6ba4d84c69e24716a8023002479762dc4dc845b00b4cdd85d"} Nov 23 07:12:00 crc kubenswrapper[5028]: I1123 07:12:00.651565 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:12:00 crc kubenswrapper[5028]: I1123 07:12:00.668228 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" podStartSLOduration=3.668212276 podStartE2EDuration="3.668212276s" podCreationTimestamp="2025-11-23 07:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:00.663189453 +0000 UTC m=+1304.360594232" 
watchObservedRunningTime="2025-11-23 07:12:00.668212276 +0000 UTC m=+1304.365617055" Nov 23 07:12:01 crc kubenswrapper[5028]: I1123 07:12:01.714208 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:12:01 crc kubenswrapper[5028]: I1123 07:12:01.727612 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.690327 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a12a485c-957d-4ae6-967b-c9243b7792e9","Type":"ContainerStarted","Data":"00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2"} Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.690590 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a12a485c-957d-4ae6-967b-c9243b7792e9","Type":"ContainerStarted","Data":"baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f"} Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.690459 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-metadata" containerID="cri-o://00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2" gracePeriod=30 Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.690397 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-log" containerID="cri-o://baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f" gracePeriod=30 Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.693714 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe69df49-0822-44c8-b362-fdb7f0683f88","Type":"ContainerStarted","Data":"6f3ee00baa666ce4b0a62f5838f506fd3532211c0c7576d534a5dbad1491e360"} Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.693788 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="fe69df49-0822-44c8-b362-fdb7f0683f88" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6f3ee00baa666ce4b0a62f5838f506fd3532211c0c7576d534a5dbad1491e360" gracePeriod=30 Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.696646 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69b8f2fd-fc67-4c34-9255-cf040d4dfecc","Type":"ContainerStarted","Data":"65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12"} Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.696688 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69b8f2fd-fc67-4c34-9255-cf040d4dfecc","Type":"ContainerStarted","Data":"2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7"} Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.698439 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a","Type":"ContainerStarted","Data":"4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a"} Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.714358 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.332619458 podStartE2EDuration="5.714338684s" 
podCreationTimestamp="2025-11-23 07:11:57 +0000 UTC" firstStartedPulling="2025-11-23 07:11:58.729327711 +0000 UTC m=+1302.426732490" lastFinishedPulling="2025-11-23 07:12:02.111046937 +0000 UTC m=+1305.808451716" observedRunningTime="2025-11-23 07:12:02.712972971 +0000 UTC m=+1306.410377750" watchObservedRunningTime="2025-11-23 07:12:02.714338684 +0000 UTC m=+1306.411743463" Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.731557 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.214656425 podStartE2EDuration="5.731539545s" podCreationTimestamp="2025-11-23 07:11:57 +0000 UTC" firstStartedPulling="2025-11-23 07:11:58.595559141 +0000 UTC m=+1302.292963920" lastFinishedPulling="2025-11-23 07:12:02.112442261 +0000 UTC m=+1305.809847040" observedRunningTime="2025-11-23 07:12:02.729807162 +0000 UTC m=+1306.427211951" watchObservedRunningTime="2025-11-23 07:12:02.731539545 +0000 UTC m=+1306.428944324" Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.745878 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.755391974 podStartE2EDuration="5.745861705s" podCreationTimestamp="2025-11-23 07:11:57 +0000 UTC" firstStartedPulling="2025-11-23 07:11:59.126208473 +0000 UTC m=+1302.823613252" lastFinishedPulling="2025-11-23 07:12:02.116678204 +0000 UTC m=+1305.814082983" observedRunningTime="2025-11-23 07:12:02.743235381 +0000 UTC m=+1306.440640160" watchObservedRunningTime="2025-11-23 07:12:02.745861705 +0000 UTC m=+1306.443266484" Nov 23 07:12:02 crc kubenswrapper[5028]: I1123 07:12:02.762498 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.73032256 podStartE2EDuration="5.762480071s" podCreationTimestamp="2025-11-23 07:11:57 +0000 UTC" firstStartedPulling="2025-11-23 07:11:59.080451344 +0000 UTC m=+1302.777856123" lastFinishedPulling="2025-11-23 07:12:02.112608855 +0000 UTC m=+1305.810013634" observedRunningTime="2025-11-23 07:12:02.758063803 +0000 UTC m=+1306.455468582" watchObservedRunningTime="2025-11-23 07:12:02.762480071 +0000 UTC m=+1306.459884850" Nov 23 07:12:03 crc kubenswrapper[5028]: I1123 07:12:03.072519 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:12:03 crc kubenswrapper[5028]: I1123 07:12:03.072571 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:12:03 crc kubenswrapper[5028]: I1123 07:12:03.256141 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:03 crc kubenswrapper[5028]: I1123 07:12:03.278463 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:12:03 crc kubenswrapper[5028]: I1123 07:12:03.710569 5028 generic.go:334] "Generic (PLEG): container finished" podID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerID="baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f" exitCode=143 Nov 23 07:12:03 crc kubenswrapper[5028]: I1123 07:12:03.710660 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a12a485c-957d-4ae6-967b-c9243b7792e9","Type":"ContainerDied","Data":"baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f"} Nov 23 07:12:06 crc kubenswrapper[5028]: I1123 07:12:06.755009 5028 generic.go:334] "Generic (PLEG): container 
finished" podID="0625eec8-1472-4c19-8ebd-c2a9260a5231" containerID="552e38d88689ec46fa8727e87ef0816f6ef787b7b5c68938daf09f72f26e9a9f" exitCode=0 Nov 23 07:12:06 crc kubenswrapper[5028]: I1123 07:12:06.755081 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nnh6w" event={"ID":"0625eec8-1472-4c19-8ebd-c2a9260a5231","Type":"ContainerDied","Data":"552e38d88689ec46fa8727e87ef0816f6ef787b7b5c68938daf09f72f26e9a9f"} Nov 23 07:12:06 crc kubenswrapper[5028]: I1123 07:12:06.757470 5028 generic.go:334] "Generic (PLEG): container finished" podID="f844edf7-0721-4de7-a55d-615ede8fa93a" containerID="22b4e39ada1d711841f2d5ab6596639fc374030997ad8dac32ca318f11a99634" exitCode=0 Nov 23 07:12:06 crc kubenswrapper[5028]: I1123 07:12:06.757502 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-db4rd" event={"ID":"f844edf7-0721-4de7-a55d-615ede8fa93a","Type":"ContainerDied","Data":"22b4e39ada1d711841f2d5ab6596639fc374030997ad8dac32ca318f11a99634"} Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.034560 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.034978 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.232206 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.238461 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.270232 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.278993 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.327862 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.342203 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7965876c4f-zc5ww"] Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.342504 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" podUID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerName="dnsmasq-dns" containerID="cri-o://9dfd92e623f40c9acc280dbb212734247aa57f06bb3da53e198bc52bfab450bb" gracePeriod=10 Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.350778 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-config-data\") pod \"f844edf7-0721-4de7-a55d-615ede8fa93a\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.350822 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-combined-ca-bundle\") pod \"f844edf7-0721-4de7-a55d-615ede8fa93a\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 
07:12:08.350842 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8djd\" (UniqueName: \"kubernetes.io/projected/f844edf7-0721-4de7-a55d-615ede8fa93a-kube-api-access-g8djd\") pod \"f844edf7-0721-4de7-a55d-615ede8fa93a\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.350876 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-scripts\") pod \"f844edf7-0721-4de7-a55d-615ede8fa93a\" (UID: \"f844edf7-0721-4de7-a55d-615ede8fa93a\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.350910 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-config-data\") pod \"0625eec8-1472-4c19-8ebd-c2a9260a5231\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.351039 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-scripts\") pod \"0625eec8-1472-4c19-8ebd-c2a9260a5231\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.351075 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnwx9\" (UniqueName: \"kubernetes.io/projected/0625eec8-1472-4c19-8ebd-c2a9260a5231-kube-api-access-lnwx9\") pod \"0625eec8-1472-4c19-8ebd-c2a9260a5231\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.351132 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-combined-ca-bundle\") pod \"0625eec8-1472-4c19-8ebd-c2a9260a5231\" (UID: \"0625eec8-1472-4c19-8ebd-c2a9260a5231\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.359925 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-scripts" (OuterVolumeSpecName: "scripts") pod "f844edf7-0721-4de7-a55d-615ede8fa93a" (UID: "f844edf7-0721-4de7-a55d-615ede8fa93a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.371300 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f844edf7-0721-4de7-a55d-615ede8fa93a-kube-api-access-g8djd" (OuterVolumeSpecName: "kube-api-access-g8djd") pod "f844edf7-0721-4de7-a55d-615ede8fa93a" (UID: "f844edf7-0721-4de7-a55d-615ede8fa93a"). InnerVolumeSpecName "kube-api-access-g8djd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.372122 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0625eec8-1472-4c19-8ebd-c2a9260a5231-kube-api-access-lnwx9" (OuterVolumeSpecName: "kube-api-access-lnwx9") pod "0625eec8-1472-4c19-8ebd-c2a9260a5231" (UID: "0625eec8-1472-4c19-8ebd-c2a9260a5231"). InnerVolumeSpecName "kube-api-access-lnwx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.407061 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0625eec8-1472-4c19-8ebd-c2a9260a5231" (UID: "0625eec8-1472-4c19-8ebd-c2a9260a5231"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.413660 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-scripts" (OuterVolumeSpecName: "scripts") pod "0625eec8-1472-4c19-8ebd-c2a9260a5231" (UID: "0625eec8-1472-4c19-8ebd-c2a9260a5231"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.416978 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-config-data" (OuterVolumeSpecName: "config-data") pod "0625eec8-1472-4c19-8ebd-c2a9260a5231" (UID: "0625eec8-1472-4c19-8ebd-c2a9260a5231"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.426252 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-config-data" (OuterVolumeSpecName: "config-data") pod "f844edf7-0721-4de7-a55d-615ede8fa93a" (UID: "f844edf7-0721-4de7-a55d-615ede8fa93a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.429108 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f844edf7-0721-4de7-a55d-615ede8fa93a" (UID: "f844edf7-0721-4de7-a55d-615ede8fa93a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453584 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453627 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453641 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8djd\" (UniqueName: \"kubernetes.io/projected/f844edf7-0721-4de7-a55d-615ede8fa93a-kube-api-access-g8djd\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453654 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f844edf7-0721-4de7-a55d-615ede8fa93a-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453666 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453676 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453687 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnwx9\" (UniqueName: \"kubernetes.io/projected/0625eec8-1472-4c19-8ebd-c2a9260a5231-kube-api-access-lnwx9\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.453699 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0625eec8-1472-4c19-8ebd-c2a9260a5231-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.816232 5028 generic.go:334] "Generic (PLEG): container finished" podID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerID="9dfd92e623f40c9acc280dbb212734247aa57f06bb3da53e198bc52bfab450bb" exitCode=0 Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.816619 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" event={"ID":"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc","Type":"ContainerDied","Data":"9dfd92e623f40c9acc280dbb212734247aa57f06bb3da53e198bc52bfab450bb"} Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.836488 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-db4rd" event={"ID":"f844edf7-0721-4de7-a55d-615ede8fa93a","Type":"ContainerDied","Data":"b5c699b53d2bd9afb20ca3e8217fae77a99ef3c029a02114245c58ce88c3cb86"} Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.836530 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5c699b53d2bd9afb20ca3e8217fae77a99ef3c029a02114245c58ce88c3cb86" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.836594 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-db4rd" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.856262 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nnh6w" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.857081 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nnh6w" event={"ID":"0625eec8-1472-4c19-8ebd-c2a9260a5231","Type":"ContainerDied","Data":"e16b5907f2e9f354287194ba1afd474f019f0880a45849095ee12111e1863485"} Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.857117 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e16b5907f2e9f354287194ba1afd474f019f0880a45849095ee12111e1863485" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.892788 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.909142 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.968406 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-svc\") pod \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.968518 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-config\") pod \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.968560 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-nb\") pod \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.968628 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rtqp\" (UniqueName: \"kubernetes.io/projected/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-kube-api-access-2rtqp\") pod \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.968728 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-sb\") pod \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.968798 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-swift-storage-0\") pod \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\" (UID: \"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc\") " Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.973277 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-kube-api-access-2rtqp" (OuterVolumeSpecName: "kube-api-access-2rtqp") pod 
"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" (UID: "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc"). InnerVolumeSpecName "kube-api-access-2rtqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978032 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:12:08 crc kubenswrapper[5028]: E1123 07:12:08.978439 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerName="dnsmasq-dns" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978451 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerName="dnsmasq-dns" Nov 23 07:12:08 crc kubenswrapper[5028]: E1123 07:12:08.978464 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f844edf7-0721-4de7-a55d-615ede8fa93a" containerName="nova-cell1-conductor-db-sync" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978470 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f844edf7-0721-4de7-a55d-615ede8fa93a" containerName="nova-cell1-conductor-db-sync" Nov 23 07:12:08 crc kubenswrapper[5028]: E1123 07:12:08.978492 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0625eec8-1472-4c19-8ebd-c2a9260a5231" containerName="nova-manage" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978497 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0625eec8-1472-4c19-8ebd-c2a9260a5231" containerName="nova-manage" Nov 23 07:12:08 crc kubenswrapper[5028]: E1123 07:12:08.978511 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerName="init" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978519 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerName="init" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978700 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" containerName="dnsmasq-dns" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978724 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0625eec8-1472-4c19-8ebd-c2a9260a5231" containerName="nova-manage" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.978736 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f844edf7-0721-4de7-a55d-615ede8fa93a" containerName="nova-cell1-conductor-db-sync" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.979328 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:08 crc kubenswrapper[5028]: I1123 07:12:08.983174 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.015921 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.023067 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.023353 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-log" containerID="cri-o://2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7" gracePeriod=30 Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.023443 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-api" containerID="cri-o://65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12" gracePeriod=30 Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.036291 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.036319 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.070737 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.070812 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.070902 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782dx\" (UniqueName: \"kubernetes.io/projected/7ea19991-fd9e-4f02-a48f-e6bc67848e43-kube-api-access-782dx\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.071016 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rtqp\" (UniqueName: \"kubernetes.io/projected/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-kube-api-access-2rtqp\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.076087 5028 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-config" (OuterVolumeSpecName: "config") pod "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" (UID: "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.077556 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" (UID: "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.078586 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" (UID: "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.080265 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" (UID: "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.094975 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" (UID: "ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172244 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782dx\" (UniqueName: \"kubernetes.io/projected/7ea19991-fd9e-4f02-a48f-e6bc67848e43-kube-api-access-782dx\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172543 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172653 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172809 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172833 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172846 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172858 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.172872 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.178852 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.179338 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.194620 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782dx\" (UniqueName: \"kubernetes.io/projected/7ea19991-fd9e-4f02-a48f-e6bc67848e43-kube-api-access-782dx\") pod \"nova-cell1-conductor-0\" (UID: 
\"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.312687 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.406931 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.808333 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.870489 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ea19991-fd9e-4f02-a48f-e6bc67848e43","Type":"ContainerStarted","Data":"c137446641ac0423cf8ec6d13ee8140245a2c709921537283a41f1791c29c3f7"} Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.874853 5028 generic.go:334] "Generic (PLEG): container finished" podID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerID="2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7" exitCode=143 Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.874940 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69b8f2fd-fc67-4c34-9255-cf040d4dfecc","Type":"ContainerDied","Data":"2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7"} Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.877140 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.877764 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7965876c4f-zc5ww" event={"ID":"ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc","Type":"ContainerDied","Data":"7a51766c92650d962dec57474ab596fd2795403eec39270bcf555ffb0f98866d"} Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.877842 5028 scope.go:117] "RemoveContainer" containerID="9dfd92e623f40c9acc280dbb212734247aa57f06bb3da53e198bc52bfab450bb" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.917195 5028 scope.go:117] "RemoveContainer" containerID="ac1e24102c8e305045c9774cd73e786db0592b89ca49b65c672608607a6701fa" Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.932524 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7965876c4f-zc5ww"] Nov 23 07:12:09 crc kubenswrapper[5028]: I1123 07:12:09.940642 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7965876c4f-zc5ww"] Nov 23 07:12:10 crc kubenswrapper[5028]: I1123 07:12:10.888996 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ea19991-fd9e-4f02-a48f-e6bc67848e43","Type":"ContainerStarted","Data":"a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116"} Nov 23 07:12:10 crc kubenswrapper[5028]: I1123 07:12:10.889612 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:10 crc kubenswrapper[5028]: I1123 07:12:10.892148 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" containerName="nova-scheduler-scheduler" containerID="cri-o://4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a" gracePeriod=30 Nov 23 07:12:10 crc kubenswrapper[5028]: I1123 07:12:10.914474 5028 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.914453656 podStartE2EDuration="2.914453656s" podCreationTimestamp="2025-11-23 07:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:10.909605307 +0000 UTC m=+1314.607010126" watchObservedRunningTime="2025-11-23 07:12:10.914453656 +0000 UTC m=+1314.611858435" Nov 23 07:12:11 crc kubenswrapper[5028]: I1123 07:12:11.069743 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc" path="/var/lib/kubelet/pods/ba44f44b-bfd6-4d41-b5b4-e58f8fcde8bc/volumes" Nov 23 07:12:13 crc kubenswrapper[5028]: E1123 07:12:13.281603 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:12:13 crc kubenswrapper[5028]: E1123 07:12:13.284025 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:12:13 crc kubenswrapper[5028]: E1123 07:12:13.286460 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:12:13 crc kubenswrapper[5028]: E1123 07:12:13.286517 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" containerName="nova-scheduler-scheduler" Nov 23 07:12:13 crc kubenswrapper[5028]: I1123 07:12:13.918901 5028 generic.go:334] "Generic (PLEG): container finished" podID="558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" containerID="4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a" exitCode=0 Nov 23 07:12:13 crc kubenswrapper[5028]: I1123 07:12:13.918960 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a","Type":"ContainerDied","Data":"4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a"} Nov 23 07:12:13 crc kubenswrapper[5028]: I1123 07:12:13.919009 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a","Type":"ContainerDied","Data":"0fae13be9ac2054cf94535b323601acf59064d07d6890c6f1f65a3fe7ec64122"} Nov 23 07:12:13 crc kubenswrapper[5028]: I1123 07:12:13.919030 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fae13be9ac2054cf94535b323601acf59064d07d6890c6f1f65a3fe7ec64122" Nov 23 07:12:13 crc kubenswrapper[5028]: I1123 07:12:13.976479 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 07:12:13 crc kubenswrapper[5028]: I1123 07:12:13.976479 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.076363 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-config-data\") pod \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") "
Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.077055 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm5rl\" (UniqueName: \"kubernetes.io/projected/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-kube-api-access-vm5rl\") pod \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") "
Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.077130 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-combined-ca-bundle\") pod \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\" (UID: \"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a\") "
Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.084029 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-kube-api-access-vm5rl" (OuterVolumeSpecName: "kube-api-access-vm5rl") pod "558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" (UID: "558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a"). InnerVolumeSpecName "kube-api-access-vm5rl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.105098 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-config-data" (OuterVolumeSpecName: "config-data") pod "558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" (UID: "558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
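The UnmountVolume/TearDown pairs follow the kubelet's volume reconciler pattern: compare the actual state (what is mounted) with the desired state (volumes of pods that should still exist), and tear down anything the desired state no longer contains. A toy version of that unmount pass, with illustrative types that are not kubelet API:

    package main

    import "fmt"

    type volumeKey struct{ podUID, volume string }

    // reconcileUnmounts tears down every mounted volume that no longer
    // appears in the desired state, as when nova-scheduler-0 was deleted.
    func reconcileUnmounts(desired map[volumeKey]bool, actual []volumeKey,
    	tearDown func(volumeKey) error) {
    	for _, v := range actual {
    		if desired[v] {
    			continue // pod still wants this volume
    		}
    		// corresponds to "operationExecutor.UnmountVolume started ..."
    		if err := tearDown(v); err != nil {
    			fmt.Println("unmount failed, will retry:", err)
    			continue
    		}
    		// corresponds to the later "Volume detached ... DevicePath \"\""
    		fmt.Printf("volume %q for pod %s detached\n", v.volume, v.podUID)
    	}
    }

    func main() {
    	actual := []volumeKey{
    		{"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a", "config-data"},
    		{"558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a", "combined-ca-bundle"},
    	}
    	reconcileUnmounts(map[volumeKey]bool{}, actual, func(volumeKey) error { return nil })
    }

TearDown happens per plugin (projected for the service-account token, secret for config-data and the CA bundle), and only after all three succeed do the "Volume detached" records below appear.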
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.179641 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.179679 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.179695 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm5rl\" (UniqueName: \"kubernetes.io/projected/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a-kube-api-access-vm5rl\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.340270 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.854505 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.913087 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.935913 5028 generic.go:334] "Generic (PLEG): container finished" podID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerID="65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12" exitCode=0 Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.935981 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.936075 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.935975 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69b8f2fd-fc67-4c34-9255-cf040d4dfecc","Type":"ContainerDied","Data":"65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12"} Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.936153 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"69b8f2fd-fc67-4c34-9255-cf040d4dfecc","Type":"ContainerDied","Data":"e499df9d77031ee29cce883ccceb9db59d338038c4a627e3a2d055649cab5769"} Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.936180 5028 scope.go:117] "RemoveContainer" containerID="65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.970476 5028 scope.go:117] "RemoveContainer" containerID="2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7" Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.979586 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:14 crc kubenswrapper[5028]: I1123 07:12:14.994349 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:14.999022 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-logs\") pod \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:14.999082 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77ckd\" (UniqueName: \"kubernetes.io/projected/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-kube-api-access-77ckd\") pod \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:14.999232 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-combined-ca-bundle\") pod \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:14.999385 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-config-data\") pod \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\" (UID: \"69b8f2fd-fc67-4c34-9255-cf040d4dfecc\") " Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.003187 5028 scope.go:117] "RemoveContainer" containerID="65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12" Nov 23 07:12:15 crc kubenswrapper[5028]: E1123 07:12:15.017144 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12\": container with ID starting with 65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12 not found: ID does not exist" containerID="65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.017237 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
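The NotFound error above is benign: the container was already removed by an earlier "RemoveContainer" pass, and the second attempt just discovers it is gone. Deletion here is idempotent; "already absent" and "successfully removed" converge to the same end state. A small sketch of that convention (the sentinel error is illustrative, not the kubelet's real error type):

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound stands in for the CRI "NotFound" status in the records.
    var errNotFound = errors.New("container not found")

    // removeContainer treats "already gone" as success, so duplicate
    // RemoveContainer calls (as in the log above) stay harmless.
    func removeContainer(id string, rm func(string) error) error {
    	if err := rm(id); err != nil && !errors.Is(err, errNotFound) {
    		return err
    	}
    	return nil // removed, or it never existed: same end state
    }

    func main() {
    	rm := func(string) error { return errNotFound }
    	fmt.Println(removeContainer("65c688dd8350...", rm)) // prints <nil>
    }

This is why the "DeleteContainer returned error" records that follow are logged at info level rather than treated as failures.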
containerID={"Type":"cri-o","ID":"65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12"} err="failed to get container status \"65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12\": rpc error: code = NotFound desc = could not find container \"65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12\": container with ID starting with 65c688dd835068bf68375954d4c1069d3632f8117cc5fafee4b8a9be2b351f12 not found: ID does not exist" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.017271 5028 scope.go:117] "RemoveContainer" containerID="2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.017601 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-logs" (OuterVolumeSpecName: "logs") pod "69b8f2fd-fc67-4c34-9255-cf040d4dfecc" (UID: "69b8f2fd-fc67-4c34-9255-cf040d4dfecc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:12:15 crc kubenswrapper[5028]: E1123 07:12:15.018489 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7\": container with ID starting with 2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7 not found: ID does not exist" containerID="2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.018517 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7"} err="failed to get container status \"2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7\": rpc error: code = NotFound desc = could not find container \"2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7\": container with ID starting with 2048483ebdfe250b45bebd06663aa4c5e1c2b6c74ad8de530e54e12db964dcc7 not found: ID does not exist" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.027307 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-kube-api-access-77ckd" (OuterVolumeSpecName: "kube-api-access-77ckd") pod "69b8f2fd-fc67-4c34-9255-cf040d4dfecc" (UID: "69b8f2fd-fc67-4c34-9255-cf040d4dfecc"). InnerVolumeSpecName "kube-api-access-77ckd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.036059 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: E1123 07:12:15.036716 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" containerName="nova-scheduler-scheduler" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.036739 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" containerName="nova-scheduler-scheduler" Nov 23 07:12:15 crc kubenswrapper[5028]: E1123 07:12:15.036766 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-api" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.036773 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-api" Nov 23 07:12:15 crc kubenswrapper[5028]: E1123 07:12:15.036798 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-log" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.036808 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-log" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.037158 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-api" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.037189 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" containerName="nova-scheduler-scheduler" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.037215 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" containerName="nova-api-log" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.038131 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.040697 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.046089 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-config-data" (OuterVolumeSpecName: "config-data") pod "69b8f2fd-fc67-4c34-9255-cf040d4dfecc" (UID: "69b8f2fd-fc67-4c34-9255-cf040d4dfecc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.049512 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.066431 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a" path="/var/lib/kubelet/pods/558e4e68-d809-4b6e-aba3-0e4ab6bf2b5a/volumes" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.087084 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69b8f2fd-fc67-4c34-9255-cf040d4dfecc" (UID: "69b8f2fd-fc67-4c34-9255-cf040d4dfecc"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.103728 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-config-data\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.103794 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.104083 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqqt8\" (UniqueName: \"kubernetes.io/projected/54b90a4e-034d-4c8d-bf93-ed27f5467b32-kube-api-access-fqqt8\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.104215 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.104233 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.104243 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77ckd\" (UniqueName: \"kubernetes.io/projected/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-kube-api-access-77ckd\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.104254 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b8f2fd-fc67-4c34-9255-cf040d4dfecc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.206490 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqqt8\" (UniqueName: \"kubernetes.io/projected/54b90a4e-034d-4c8d-bf93-ed27f5467b32-kube-api-access-fqqt8\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.206650 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-config-data\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.206691 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.211275 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.211277 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-config-data\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.220482 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqqt8\" (UniqueName: \"kubernetes.io/projected/54b90a4e-034d-4c8d-bf93-ed27f5467b32-kube-api-access-fqqt8\") pod \"nova-scheduler-0\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.268603 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.278513 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.289841 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.291923 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.295325 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.298889 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.310558 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tp5q\" (UniqueName: \"kubernetes.io/projected/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-kube-api-access-2tp5q\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.310794 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-logs\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.310993 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.311131 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-config-data\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.413367 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-logs\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.413710 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.413764 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-config-data\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.413808 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-logs\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.413816 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tp5q\" (UniqueName: \"kubernetes.io/projected/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-kube-api-access-2tp5q\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.419728 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.420517 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-config-data\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.426903 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.431902 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tp5q\" (UniqueName: \"kubernetes.io/projected/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-kube-api-access-2tp5q\") pod \"nova-api-0\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") " pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.622821 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.850016 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:15 crc kubenswrapper[5028]: I1123 07:12:15.946295 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54b90a4e-034d-4c8d-bf93-ed27f5467b32","Type":"ContainerStarted","Data":"6731bc3f10bf50f12b836e9b9fe562d7ff948c204dea908ece1c925856a80014"} Nov 23 07:12:16 crc kubenswrapper[5028]: I1123 07:12:16.033965 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:16 crc kubenswrapper[5028]: W1123 07:12:16.037261 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb16d3c3b_327c_4565_9e13_ad8ff67f0a52.slice/crio-2b300ac8a35dcdcd558dcceca6be5ecbeef0f28541c2bc99e99a7130ebcf8cd9 WatchSource:0}: Error finding container 2b300ac8a35dcdcd558dcceca6be5ecbeef0f28541c2bc99e99a7130ebcf8cd9: Status 404 returned error can't find the container with id 2b300ac8a35dcdcd558dcceca6be5ecbeef0f28541c2bc99e99a7130ebcf8cd9 Nov 23 07:12:16 crc kubenswrapper[5028]: I1123 07:12:16.958493 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b16d3c3b-327c-4565-9e13-ad8ff67f0a52","Type":"ContainerStarted","Data":"8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35"} Nov 23 07:12:16 crc kubenswrapper[5028]: I1123 07:12:16.959086 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b16d3c3b-327c-4565-9e13-ad8ff67f0a52","Type":"ContainerStarted","Data":"8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3"} Nov 23 07:12:16 crc kubenswrapper[5028]: I1123 07:12:16.959100 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b16d3c3b-327c-4565-9e13-ad8ff67f0a52","Type":"ContainerStarted","Data":"2b300ac8a35dcdcd558dcceca6be5ecbeef0f28541c2bc99e99a7130ebcf8cd9"} Nov 23 07:12:16 crc kubenswrapper[5028]: I1123 07:12:16.962863 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54b90a4e-034d-4c8d-bf93-ed27f5467b32","Type":"ContainerStarted","Data":"9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81"} Nov 23 07:12:16 crc kubenswrapper[5028]: I1123 07:12:16.979410 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.979387922 podStartE2EDuration="1.979387922s" podCreationTimestamp="2025-11-23 07:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:16.978586833 +0000 UTC m=+1320.675991622" watchObservedRunningTime="2025-11-23 07:12:16.979387922 +0000 UTC m=+1320.676792721" Nov 23 07:12:16 crc kubenswrapper[5028]: I1123 07:12:16.996722 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.996701805 podStartE2EDuration="2.996701805s" podCreationTimestamp="2025-11-23 07:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:16.990896554 +0000 UTC m=+1320.688301343" watchObservedRunningTime="2025-11-23 07:12:16.996701805 +0000 UTC m=+1320.694106594" Nov 23 07:12:17 crc kubenswrapper[5028]: I1123 
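The two startup-duration records reduce to simple timestamp arithmetic: the end-to-end duration is the observed running time minus podCreationTimestamp, and the SLO duration additionally excludes any image-pull window (zero here, since both pull timestamps are the zero time and the images were already present; the kube-state-metrics-0 record later in this log, 2.033s end-to-end vs 1.658s SLO with a ~0.375s pull, is consistent with that reading). A small Go check against the nova-api-0 numbers:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Reconstructing the durations printed by pod_startup_latency_tracker,
    // using the watch-observed running time from the record above.
    func main() {
    	created, _ := time.Parse(time.RFC3339, "2025-11-23T07:12:15Z")
    	observedRunning, _ := time.Parse(time.RFC3339Nano, "2025-11-23T07:12:16.979387922Z")
    	e2e := observedRunning.Sub(created)
    	pull := time.Duration(0) // firstStartedPulling/lastFinishedPulling are zero
    	fmt.Println(e2e, e2e-pull) // 1.979387922s 1.979387922s, matching the record
    }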
Nov 23 07:12:17 crc kubenswrapper[5028]: I1123 07:12:17.075227 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b8f2fd-fc67-4c34-9255-cf040d4dfecc" path="/var/lib/kubelet/pods/69b8f2fd-fc67-4c34-9255-cf040d4dfecc/volumes"
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.414587 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.415070 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="41d6b075-2987-431a-b6d2-2842bc5726de" containerName="kube-state-metrics" containerID="cri-o://42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432" gracePeriod=30
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.928929 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.981925 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49gcd\" (UniqueName: \"kubernetes.io/projected/41d6b075-2987-431a-b6d2-2842bc5726de-kube-api-access-49gcd\") pod \"41d6b075-2987-431a-b6d2-2842bc5726de\" (UID: \"41d6b075-2987-431a-b6d2-2842bc5726de\") "
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.987066 5028 generic.go:334] "Generic (PLEG): container finished" podID="41d6b075-2987-431a-b6d2-2842bc5726de" containerID="42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432" exitCode=2
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.987145 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41d6b075-2987-431a-b6d2-2842bc5726de","Type":"ContainerDied","Data":"42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432"}
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.987203 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"41d6b075-2987-431a-b6d2-2842bc5726de","Type":"ContainerDied","Data":"d3672ae7d44eeea7030585574b4aeda23e9423701dcce98b2fa5fa236d280435"}
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.987225 5028 scope.go:117] "RemoveContainer" containerID="42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432"
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.987475 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 23 07:12:18 crc kubenswrapper[5028]: I1123 07:12:18.995102 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d6b075-2987-431a-b6d2-2842bc5726de-kube-api-access-49gcd" (OuterVolumeSpecName: "kube-api-access-49gcd") pod "41d6b075-2987-431a-b6d2-2842bc5726de" (UID: "41d6b075-2987-431a-b6d2-2842bc5726de"). InnerVolumeSpecName "kube-api-access-49gcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.041911 5028 scope.go:117] "RemoveContainer" containerID="42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432"
Nov 23 07:12:19 crc kubenswrapper[5028]: E1123 07:12:19.042380 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432\": container with ID starting with 42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432 not found: ID does not exist" containerID="42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.042416 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432"} err="failed to get container status \"42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432\": rpc error: code = NotFound desc = could not find container \"42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432\": container with ID starting with 42e32981564561afce5a70a77b313e4d9306c597f7fa4a15dda7b2a56ab4e432 not found: ID does not exist"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.084275 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49gcd\" (UniqueName: \"kubernetes.io/projected/41d6b075-2987-431a-b6d2-2842bc5726de-kube-api-access-49gcd\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.307938 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.314980 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.330078 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 23 07:12:19 crc kubenswrapper[5028]: E1123 07:12:19.330491 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41d6b075-2987-431a-b6d2-2842bc5726de" containerName="kube-state-metrics"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.330510 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="41d6b075-2987-431a-b6d2-2842bc5726de" containerName="kube-state-metrics"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.330911 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="41d6b075-2987-431a-b6d2-2842bc5726de" containerName="kube-state-metrics"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.331627 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.333937 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.334146 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.340656 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.389180 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.389276 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.389335 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mfw5\" (UniqueName: \"kubernetes.io/projected/9ad676ea-95d4-483f-a7d7-574744376b19-kube-api-access-4mfw5\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.389401 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.491215 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mfw5\" (UniqueName: \"kubernetes.io/projected/9ad676ea-95d4-483f-a7d7-574744376b19-kube-api-access-4mfw5\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.492195 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0"
Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.492395 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0"
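The "Caches populated for *v1.Secret" records mean the kubelet has started a LIST+WATCH for each secret the new pod's volumes reference, so the subsequent MountVolume.SetUp calls read from a synced local cache instead of hitting the API server per mount. The kubelet does this through its internal secret manager, but the same pattern can be reproduced with a plain client-go informer; a sketch that assumes a reachable kubeconfig at the default path:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
    		informers.WithNamespace("openstack"))
    	inf := factory.Core().V1().Secrets().Informer()
    	stop := make(chan struct{})
    	factory.Start(stop)
    	// The analogue of "Caches populated": the initial LIST+WATCH synced.
    	if !cache.WaitForCacheSync(stop, inf.HasSynced) {
    		panic("cache never synced")
    	}
    	fmt.Println("secret cache synced")
    }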
(UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0" Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.498278 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0" Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.499372 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0" Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.503453 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0" Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.523241 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mfw5\" (UniqueName: \"kubernetes.io/projected/9ad676ea-95d4-483f-a7d7-574744376b19-kube-api-access-4mfw5\") pod \"kube-state-metrics-0\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " pod="openstack/kube-state-metrics-0" Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.653002 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.936432 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:12:19 crc kubenswrapper[5028]: W1123 07:12:19.940352 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ad676ea_95d4_483f_a7d7_574744376b19.slice/crio-b563de787451a8636f01c99714129d3eab76086d7039663230781fbfb5166b6b WatchSource:0}: Error finding container b563de787451a8636f01c99714129d3eab76086d7039663230781fbfb5166b6b: Status 404 returned error can't find the container with id b563de787451a8636f01c99714129d3eab76086d7039663230781fbfb5166b6b Nov 23 07:12:19 crc kubenswrapper[5028]: I1123 07:12:19.997321 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ad676ea-95d4-483f-a7d7-574744376b19","Type":"ContainerStarted","Data":"b563de787451a8636f01c99714129d3eab76086d7039663230781fbfb5166b6b"} Nov 23 07:12:20 crc kubenswrapper[5028]: I1123 07:12:20.130051 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:12:20 crc kubenswrapper[5028]: I1123 07:12:20.130773 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="proxy-httpd" containerID="cri-o://1cca987ba31c26d37c23b81a8ad83b55324ce618e4dde1dc1159757fdcd16c13" gracePeriod=30 Nov 23 07:12:20 crc kubenswrapper[5028]: I1123 07:12:20.130918 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="sg-core" containerID="cri-o://b3a362a131b827c3ce2f0f339a320e607dd85b506ccd2e7765967071f8135415" gracePeriod=30 Nov 23 07:12:20 crc kubenswrapper[5028]: I1123 07:12:20.130997 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-notification-agent" containerID="cri-o://ff20eb6f31a8f3af5fe92a56df00232c3a186ab116ef1d068710d6eb07025526" gracePeriod=30 Nov 23 07:12:20 crc kubenswrapper[5028]: I1123 07:12:20.131086 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-central-agent" containerID="cri-o://d801f00cd19fbf528af6b558d94371b98c462f082cf66d4bfd77a3e4a242d5eb" gracePeriod=30 Nov 23 07:12:20 crc kubenswrapper[5028]: I1123 07:12:20.427024 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.007870 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ad676ea-95d4-483f-a7d7-574744376b19","Type":"ContainerStarted","Data":"efc762443a33b269b97ec4b3cde54d3c8c727b78e4871cb2e7039c7badde7203"} Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.008266 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.010443 5028 generic.go:334] "Generic (PLEG): container finished" podID="604796de-b46b-48b6-a9ae-bebdecff4609" containerID="1cca987ba31c26d37c23b81a8ad83b55324ce618e4dde1dc1159757fdcd16c13" exitCode=0 Nov 23 07:12:21 crc kubenswrapper[5028]: 
I1123 07:12:21.010464 5028 generic.go:334] "Generic (PLEG): container finished" podID="604796de-b46b-48b6-a9ae-bebdecff4609" containerID="b3a362a131b827c3ce2f0f339a320e607dd85b506ccd2e7765967071f8135415" exitCode=2 Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.010477 5028 generic.go:334] "Generic (PLEG): container finished" podID="604796de-b46b-48b6-a9ae-bebdecff4609" containerID="d801f00cd19fbf528af6b558d94371b98c462f082cf66d4bfd77a3e4a242d5eb" exitCode=0 Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.010490 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerDied","Data":"1cca987ba31c26d37c23b81a8ad83b55324ce618e4dde1dc1159757fdcd16c13"} Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.010534 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerDied","Data":"b3a362a131b827c3ce2f0f339a320e607dd85b506ccd2e7765967071f8135415"} Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.010549 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerDied","Data":"d801f00cd19fbf528af6b558d94371b98c462f082cf66d4bfd77a3e4a242d5eb"} Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.033438 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.6584930770000001 podStartE2EDuration="2.033419143s" podCreationTimestamp="2025-11-23 07:12:19 +0000 UTC" firstStartedPulling="2025-11-23 07:12:19.942885464 +0000 UTC m=+1323.640290243" lastFinishedPulling="2025-11-23 07:12:20.3178115 +0000 UTC m=+1324.015216309" observedRunningTime="2025-11-23 07:12:21.02388889 +0000 UTC m=+1324.721293679" watchObservedRunningTime="2025-11-23 07:12:21.033419143 +0000 UTC m=+1324.730823922" Nov 23 07:12:21 crc kubenswrapper[5028]: I1123 07:12:21.063283 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41d6b075-2987-431a-b6d2-2842bc5726de" path="/var/lib/kubelet/pods/41d6b075-2987-431a-b6d2-2842bc5726de/volumes" Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.036063 5028 generic.go:334] "Generic (PLEG): container finished" podID="604796de-b46b-48b6-a9ae-bebdecff4609" containerID="ff20eb6f31a8f3af5fe92a56df00232c3a186ab116ef1d068710d6eb07025526" exitCode=0 Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.037341 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerDied","Data":"ff20eb6f31a8f3af5fe92a56df00232c3a186ab116ef1d068710d6eb07025526"} Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.214277 5028 util.go:48] "No ready sandbox for pod can be found. 
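The ceilometer teardown shows all four containers being killed in parallel with the same grace period, and the PLEG then reporting their exit codes: 0 for proxy-httpd and ceilometer-central-agent (clean shutdown), 2 for sg-core (the process's own error status on termination), and elsewhere in this log 143 (128+SIGTERM). A tiny Go helper that decodes these the usual Unix way:

    package main

    import "fmt"

    // classify mirrors the conventional reading of the exit codes in these
    // records: 0 = clean exit, >128 = killed by signal (code-128), anything
    // else = the application's own error status.
    func classify(code int) string {
    	switch {
    	case code == 0:
    		return "exited cleanly"
    	case code > 128:
    		return fmt.Sprintf("terminated by signal %d", code-128)
    	default:
    		return fmt.Sprintf("exited with application error %d", code)
    	}
    }

    func main() {
    	for _, c := range []int{0, 2, 143} {
    		fmt.Printf("exitCode=%d: %s\n", c, classify(c))
    	}
    }

Whether a nonzero code like 2 indicates a real problem depends on the program; for a process told to stop, it is usually just how that binary reports an interrupted run.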
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.214277 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266163 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76wzf\" (UniqueName: \"kubernetes.io/projected/604796de-b46b-48b6-a9ae-bebdecff4609-kube-api-access-76wzf\") pod \"604796de-b46b-48b6-a9ae-bebdecff4609\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") "
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266265 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-run-httpd\") pod \"604796de-b46b-48b6-a9ae-bebdecff4609\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") "
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266303 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-config-data\") pod \"604796de-b46b-48b6-a9ae-bebdecff4609\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") "
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266330 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-sg-core-conf-yaml\") pod \"604796de-b46b-48b6-a9ae-bebdecff4609\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") "
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266479 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-log-httpd\") pod \"604796de-b46b-48b6-a9ae-bebdecff4609\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") "
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266513 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-combined-ca-bundle\") pod \"604796de-b46b-48b6-a9ae-bebdecff4609\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") "
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266549 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-scripts\") pod \"604796de-b46b-48b6-a9ae-bebdecff4609\" (UID: \"604796de-b46b-48b6-a9ae-bebdecff4609\") "
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.266910 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "604796de-b46b-48b6-a9ae-bebdecff4609" (UID: "604796de-b46b-48b6-a9ae-bebdecff4609"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.267297 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.267402 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "604796de-b46b-48b6-a9ae-bebdecff4609" (UID: "604796de-b46b-48b6-a9ae-bebdecff4609"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.273638 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604796de-b46b-48b6-a9ae-bebdecff4609-kube-api-access-76wzf" (OuterVolumeSpecName: "kube-api-access-76wzf") pod "604796de-b46b-48b6-a9ae-bebdecff4609" (UID: "604796de-b46b-48b6-a9ae-bebdecff4609"). InnerVolumeSpecName "kube-api-access-76wzf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.274286 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-scripts" (OuterVolumeSpecName: "scripts") pod "604796de-b46b-48b6-a9ae-bebdecff4609" (UID: "604796de-b46b-48b6-a9ae-bebdecff4609"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.314689 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "604796de-b46b-48b6-a9ae-bebdecff4609" (UID: "604796de-b46b-48b6-a9ae-bebdecff4609"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.349942 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "604796de-b46b-48b6-a9ae-bebdecff4609" (UID: "604796de-b46b-48b6-a9ae-bebdecff4609"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.369463 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604796de-b46b-48b6-a9ae-bebdecff4609-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.369507 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.369524 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.369538 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76wzf\" (UniqueName: \"kubernetes.io/projected/604796de-b46b-48b6-a9ae-bebdecff4609-kube-api-access-76wzf\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.369553 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.390256 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-config-data" (OuterVolumeSpecName: "config-data") pod "604796de-b46b-48b6-a9ae-bebdecff4609" (UID: "604796de-b46b-48b6-a9ae-bebdecff4609"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:23 crc kubenswrapper[5028]: I1123 07:12:23.472088 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604796de-b46b-48b6-a9ae-bebdecff4609-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.048661 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"604796de-b46b-48b6-a9ae-bebdecff4609","Type":"ContainerDied","Data":"53188307914ce8ef0c8125dbee09f2fb2e9a10253714bf5847f2a68d3125ea54"}
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.048741 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.049131 5028 scope.go:117] "RemoveContainer" containerID="1cca987ba31c26d37c23b81a8ad83b55324ce618e4dde1dc1159757fdcd16c13"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.092155 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.107555 5028 scope.go:117] "RemoveContainer" containerID="b3a362a131b827c3ce2f0f339a320e607dd85b506ccd2e7765967071f8135415"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.118105 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.136117 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:24 crc kubenswrapper[5028]: E1123 07:12:24.137266 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-central-agent"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.137289 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-central-agent"
Nov 23 07:12:24 crc kubenswrapper[5028]: E1123 07:12:24.137341 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="proxy-httpd"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.137355 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="proxy-httpd"
Nov 23 07:12:24 crc kubenswrapper[5028]: E1123 07:12:24.137414 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="sg-core"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.137424 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="sg-core"
Nov 23 07:12:24 crc kubenswrapper[5028]: E1123 07:12:24.137490 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-notification-agent"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.137506 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-notification-agent"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.138244 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="sg-core"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.138274 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-notification-agent"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.138305 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="proxy-httpd"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.138321 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" containerName="ceilometer-central-agent"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.143021 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.146222 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.147351 5028 scope.go:117] "RemoveContainer" containerID="ff20eb6f31a8f3af5fe92a56df00232c3a186ab116ef1d068710d6eb07025526"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.158048 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.158369 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.158566 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.184482 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.184630 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-log-httpd\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.184701 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-run-httpd\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.184755 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7hs\" (UniqueName: \"kubernetes.io/projected/ab980d75-12f2-4c94-8b31-aac88589fe35-kube-api-access-7j7hs\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.185701 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.185747 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.185796 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-scripts\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.186076 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-config-data\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.193458 5028 scope.go:117] "RemoveContainer" containerID="d801f00cd19fbf528af6b558d94371b98c462f082cf66d4bfd77a3e4a242d5eb"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.287563 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-log-httpd\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.287942 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-run-httpd\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.288061 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j7hs\" (UniqueName: \"kubernetes.io/projected/ab980d75-12f2-4c94-8b31-aac88589fe35-kube-api-access-7j7hs\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.288102 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.288125 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.288154 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-scripts\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.288257 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-config-data\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.288293 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.288566 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-log-httpd\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.289863 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-run-httpd\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.293470 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-scripts\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.293737 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.294108 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.295779 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.300974 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-config-data\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.303252 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j7hs\" (UniqueName: \"kubernetes.io/projected/ab980d75-12f2-4c94-8b31-aac88589fe35-kube-api-access-7j7hs\") pod \"ceilometer-0\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") " pod="openstack/ceilometer-0"
Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.485226 5028 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:12:24 crc kubenswrapper[5028]: I1123 07:12:24.937446 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:12:25 crc kubenswrapper[5028]: I1123 07:12:25.064524 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604796de-b46b-48b6-a9ae-bebdecff4609" path="/var/lib/kubelet/pods/604796de-b46b-48b6-a9ae-bebdecff4609/volumes" Nov 23 07:12:25 crc kubenswrapper[5028]: I1123 07:12:25.065660 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerStarted","Data":"7bd4589173c4206c20e83f230c7751d4341095ebf1ac95fa0fb056cf8536757f"} Nov 23 07:12:25 crc kubenswrapper[5028]: I1123 07:12:25.435759 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:12:25 crc kubenswrapper[5028]: I1123 07:12:25.477048 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:12:25 crc kubenswrapper[5028]: I1123 07:12:25.623541 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:12:25 crc kubenswrapper[5028]: I1123 07:12:25.623645 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:12:26 crc kubenswrapper[5028]: I1123 07:12:26.072611 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerStarted","Data":"7ae6a212dc85dd5658f276d9604d52215998dfc0b9d6c64b0e581e0996f55f7b"} Nov 23 07:12:26 crc kubenswrapper[5028]: I1123 07:12:26.109711 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:12:26 crc kubenswrapper[5028]: I1123 07:12:26.707165 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:12:26 crc kubenswrapper[5028]: I1123 07:12:26.707188 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:12:27 crc kubenswrapper[5028]: I1123 07:12:27.084330 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerStarted","Data":"17a088ec5cafe1e6e6162320027677e0f1fe4412b117204165f0c1167857a7cc"} Nov 23 07:12:27 crc kubenswrapper[5028]: I1123 07:12:27.085364 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerStarted","Data":"534aa5780054a5eb17cd0f8ef0b6067764fd51e0205de0f27d880b90313dae9e"} Nov 23 07:12:29 crc kubenswrapper[5028]: I1123 07:12:29.105489 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerStarted","Data":"fa448aff4de343ead2c4abe1de22719e056f3c86eb5bd73425933dcb7846980b"} Nov 23 07:12:29 crc kubenswrapper[5028]: 
I1123 07:12:29.106119 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 07:12:29 crc kubenswrapper[5028]: I1123 07:12:29.123668 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.122491454 podStartE2EDuration="5.123647567s" podCreationTimestamp="2025-11-23 07:12:24 +0000 UTC" firstStartedPulling="2025-11-23 07:12:24.948026235 +0000 UTC m=+1328.645431014" lastFinishedPulling="2025-11-23 07:12:27.949182338 +0000 UTC m=+1331.646587127" observedRunningTime="2025-11-23 07:12:29.123420531 +0000 UTC m=+1332.820825320" watchObservedRunningTime="2025-11-23 07:12:29.123647567 +0000 UTC m=+1332.821052346" Nov 23 07:12:29 crc kubenswrapper[5028]: I1123 07:12:29.663335 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.141534 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.146623 5028 generic.go:334] "Generic (PLEG): container finished" podID="fe69df49-0822-44c8-b362-fdb7f0683f88" containerID="6f3ee00baa666ce4b0a62f5838f506fd3532211c0c7576d534a5dbad1491e360" exitCode=137 Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.146702 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe69df49-0822-44c8-b362-fdb7f0683f88","Type":"ContainerDied","Data":"6f3ee00baa666ce4b0a62f5838f506fd3532211c0c7576d534a5dbad1491e360"} Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.146729 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe69df49-0822-44c8-b362-fdb7f0683f88","Type":"ContainerDied","Data":"ea0cfd6a4ffdf1a3b9a3f1d9c9e6d783e5abee4545c8b004e34e0751572f3c69"} Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.146738 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0cfd6a4ffdf1a3b9a3f1d9c9e6d783e5abee4545c8b004e34e0751572f3c69" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.148518 5028 generic.go:334] "Generic (PLEG): container finished" podID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerID="00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2" exitCode=137 Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.148543 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.148558 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a12a485c-957d-4ae6-967b-c9243b7792e9","Type":"ContainerDied","Data":"00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2"} Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.148580 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a12a485c-957d-4ae6-967b-c9243b7792e9","Type":"ContainerDied","Data":"d8d64b4b218e8a4d184f4b45396276112019aeb33241012d3df6282f9034b6e9"} Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.148597 5028 scope.go:117] "RemoveContainer" containerID="00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.164368 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.182371 5028 scope.go:117] "RemoveContainer" containerID="baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.205901 5028 scope.go:117] "RemoveContainer" containerID="00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2" Nov 23 07:12:33 crc kubenswrapper[5028]: E1123 07:12:33.206368 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2\": container with ID starting with 00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2 not found: ID does not exist" containerID="00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.206418 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2"} err="failed to get container status \"00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2\": rpc error: code = NotFound desc = could not find container \"00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2\": container with ID starting with 00a1cb9526032195a86b3907f85db9c30c9cadcac5d781e610e5e14f4746a8f2 not found: ID does not exist" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.206447 5028 scope.go:117] "RemoveContainer" containerID="baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f" Nov 23 07:12:33 crc kubenswrapper[5028]: E1123 07:12:33.206898 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f\": container with ID starting with baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f not found: ID does not exist" containerID="baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.206930 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f"} err="failed to get container status \"baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f\": rpc error: code = NotFound desc = could not find container \"baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f\": container with ID starting with baf582f4dfdd9c28521e73bcef0124ede6942a40c8e049c6d758af880479858f not found: ID does not exist" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.277254 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a12a485c-957d-4ae6-967b-c9243b7792e9-logs\") pod \"a12a485c-957d-4ae6-967b-c9243b7792e9\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.277311 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-combined-ca-bundle\") pod \"a12a485c-957d-4ae6-967b-c9243b7792e9\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.277437 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-dxnzt\" (UniqueName: \"kubernetes.io/projected/a12a485c-957d-4ae6-967b-c9243b7792e9-kube-api-access-dxnzt\") pod \"a12a485c-957d-4ae6-967b-c9243b7792e9\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.277482 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffdmj\" (UniqueName: \"kubernetes.io/projected/fe69df49-0822-44c8-b362-fdb7f0683f88-kube-api-access-ffdmj\") pod \"fe69df49-0822-44c8-b362-fdb7f0683f88\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.277581 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-config-data\") pod \"a12a485c-957d-4ae6-967b-c9243b7792e9\" (UID: \"a12a485c-957d-4ae6-967b-c9243b7792e9\") " Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.277688 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-combined-ca-bundle\") pod \"fe69df49-0822-44c8-b362-fdb7f0683f88\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.277732 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-config-data\") pod \"fe69df49-0822-44c8-b362-fdb7f0683f88\" (UID: \"fe69df49-0822-44c8-b362-fdb7f0683f88\") " Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.279771 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a12a485c-957d-4ae6-967b-c9243b7792e9-logs" (OuterVolumeSpecName: "logs") pod "a12a485c-957d-4ae6-967b-c9243b7792e9" (UID: "a12a485c-957d-4ae6-967b-c9243b7792e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.284549 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a12a485c-957d-4ae6-967b-c9243b7792e9-kube-api-access-dxnzt" (OuterVolumeSpecName: "kube-api-access-dxnzt") pod "a12a485c-957d-4ae6-967b-c9243b7792e9" (UID: "a12a485c-957d-4ae6-967b-c9243b7792e9"). InnerVolumeSpecName "kube-api-access-dxnzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.284786 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe69df49-0822-44c8-b362-fdb7f0683f88-kube-api-access-ffdmj" (OuterVolumeSpecName: "kube-api-access-ffdmj") pod "fe69df49-0822-44c8-b362-fdb7f0683f88" (UID: "fe69df49-0822-44c8-b362-fdb7f0683f88"). InnerVolumeSpecName "kube-api-access-ffdmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.306415 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-config-data" (OuterVolumeSpecName: "config-data") pod "fe69df49-0822-44c8-b362-fdb7f0683f88" (UID: "fe69df49-0822-44c8-b362-fdb7f0683f88"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.307602 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a12a485c-957d-4ae6-967b-c9243b7792e9" (UID: "a12a485c-957d-4ae6-967b-c9243b7792e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.310151 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe69df49-0822-44c8-b362-fdb7f0683f88" (UID: "fe69df49-0822-44c8-b362-fdb7f0683f88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.311932 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-config-data" (OuterVolumeSpecName: "config-data") pod "a12a485c-957d-4ae6-967b-c9243b7792e9" (UID: "a12a485c-957d-4ae6-967b-c9243b7792e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.380432 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.380464 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe69df49-0822-44c8-b362-fdb7f0683f88-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.380474 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a12a485c-957d-4ae6-967b-c9243b7792e9-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.380483 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.380492 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxnzt\" (UniqueName: \"kubernetes.io/projected/a12a485c-957d-4ae6-967b-c9243b7792e9-kube-api-access-dxnzt\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.380502 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffdmj\" (UniqueName: \"kubernetes.io/projected/fe69df49-0822-44c8-b362-fdb7f0683f88-kube-api-access-ffdmj\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.380510 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a12a485c-957d-4ae6-967b-c9243b7792e9-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.488104 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.495235 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] 
Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.517897 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:33 crc kubenswrapper[5028]: E1123 07:12:33.518403 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-log" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.518426 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-log" Nov 23 07:12:33 crc kubenswrapper[5028]: E1123 07:12:33.518452 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe69df49-0822-44c8-b362-fdb7f0683f88" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.518462 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe69df49-0822-44c8-b362-fdb7f0683f88" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:12:33 crc kubenswrapper[5028]: E1123 07:12:33.518472 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-metadata" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.518479 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-metadata" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.518811 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe69df49-0822-44c8-b362-fdb7f0683f88" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.518847 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-metadata" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.518867 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" containerName="nova-metadata-log" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.520103 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.523278 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.524478 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.526710 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.686519 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.686555 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff542a1-9fb0-40e9-8428-da161db08c91-logs\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.686610 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.686732 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-config-data\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.686840 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvm2\" (UniqueName: \"kubernetes.io/projected/aff542a1-9fb0-40e9-8428-da161db08c91-kube-api-access-xcvm2\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.788838 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcvm2\" (UniqueName: \"kubernetes.io/projected/aff542a1-9fb0-40e9-8428-da161db08c91-kube-api-access-xcvm2\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.789001 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.789057 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff542a1-9fb0-40e9-8428-da161db08c91-logs\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " 
pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.789118 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.789159 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-config-data\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.789656 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff542a1-9fb0-40e9-8428-da161db08c91-logs\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.792566 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.792676 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.793263 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-config-data\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.807345 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcvm2\" (UniqueName: \"kubernetes.io/projected/aff542a1-9fb0-40e9-8428-da161db08c91-kube-api-access-xcvm2\") pod \"nova-metadata-0\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " pod="openstack/nova-metadata-0" Nov 23 07:12:33 crc kubenswrapper[5028]: I1123 07:12:33.842491 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.160224 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.200921 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.223841 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.239062 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.240423 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.243571 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.243888 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.245079 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.259078 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.273842 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.404465 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.404842 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.404888 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.404990 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.405021 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg2hj\" (UniqueName: \"kubernetes.io/projected/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-kube-api-access-wg2hj\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.506878 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.506938 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.507017 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.507042 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg2hj\" (UniqueName: \"kubernetes.io/projected/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-kube-api-access-wg2hj\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.507119 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.516840 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.517589 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.518353 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.520796 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.523101 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg2hj\" (UniqueName: \"kubernetes.io/projected/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-kube-api-access-wg2hj\") pod \"nova-cell1-novncproxy-0\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.561930 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:12:34 crc kubenswrapper[5028]: I1123 07:12:34.983318 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:12:34 crc kubenswrapper[5028]: W1123 07:12:34.991775 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa89bc7_a850_429e_a0ef_c5f2906b0d18.slice/crio-a84b8052b0b2fd22ad2f213f0fec236e8ff2e66a7d39efb2561c980961b98dc0 WatchSource:0}: Error finding container a84b8052b0b2fd22ad2f213f0fec236e8ff2e66a7d39efb2561c980961b98dc0: Status 404 returned error can't find the container with id a84b8052b0b2fd22ad2f213f0fec236e8ff2e66a7d39efb2561c980961b98dc0 Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.067783 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a12a485c-957d-4ae6-967b-c9243b7792e9" path="/var/lib/kubelet/pods/a12a485c-957d-4ae6-967b-c9243b7792e9/volumes" Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.068534 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe69df49-0822-44c8-b362-fdb7f0683f88" path="/var/lib/kubelet/pods/fe69df49-0822-44c8-b362-fdb7f0683f88/volumes" Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.170315 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aff542a1-9fb0-40e9-8428-da161db08c91","Type":"ContainerStarted","Data":"217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e"} Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.170749 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aff542a1-9fb0-40e9-8428-da161db08c91","Type":"ContainerStarted","Data":"89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151"} Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.170767 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aff542a1-9fb0-40e9-8428-da161db08c91","Type":"ContainerStarted","Data":"e196c30d63f4229afeee64724b4d0f844fb16073843371fe55a87d4af3b5cbcc"} Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.173324 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eaa89bc7-a850-429e-a0ef-c5f2906b0d18","Type":"ContainerStarted","Data":"a84b8052b0b2fd22ad2f213f0fec236e8ff2e66a7d39efb2561c980961b98dc0"} Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.197554 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.1975365829999998 podStartE2EDuration="2.197536583s" podCreationTimestamp="2025-11-23 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:35.189547188 +0000 UTC m=+1338.886951967" watchObservedRunningTime="2025-11-23 07:12:35.197536583 +0000 UTC m=+1338.894941362" Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.627496 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.627980 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:12:35 crc kubenswrapper[5028]: I1123 07:12:35.628296 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:12:35 crc 
kubenswrapper[5028]: I1123 07:12:35.630965 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.184698 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eaa89bc7-a850-429e-a0ef-c5f2906b0d18","Type":"ContainerStarted","Data":"26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a"} Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.185217 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.189447 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.205260 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.205233406 podStartE2EDuration="2.205233406s" podCreationTimestamp="2025-11-23 07:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:36.202424657 +0000 UTC m=+1339.899829426" watchObservedRunningTime="2025-11-23 07:12:36.205233406 +0000 UTC m=+1339.902638195" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.404829 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55bfb77665-zk585"] Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.406328 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.425167 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55bfb77665-zk585"] Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.548319 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-swift-storage-0\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.548397 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwg29\" (UniqueName: \"kubernetes.io/projected/02a942e1-e2f6-45ea-829d-70d45cca4860-kube-api-access-hwg29\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.548475 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-config\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.548516 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-sb\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.548595 
5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-svc\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.548657 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-nb\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.649795 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-sb\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.649869 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-svc\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.649911 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-nb\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.650002 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-swift-storage-0\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.650037 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwg29\" (UniqueName: \"kubernetes.io/projected/02a942e1-e2f6-45ea-829d-70d45cca4860-kube-api-access-hwg29\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.650097 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-config\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.650912 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-sb\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.651001 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-svc\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.651708 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-nb\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.651723 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-swift-storage-0\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.651718 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-config\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.676049 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwg29\" (UniqueName: \"kubernetes.io/projected/02a942e1-e2f6-45ea-829d-70d45cca4860-kube-api-access-hwg29\") pod \"dnsmasq-dns-55bfb77665-zk585\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") " pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:12:36 crc kubenswrapper[5028]: I1123 07:12:36.757259 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55bfb77665-zk585"
Nov 23 07:12:37 crc kubenswrapper[5028]: I1123 07:12:37.234730 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55bfb77665-zk585"]
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.202102 5028 generic.go:334] "Generic (PLEG): container finished" podID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerID="dbdc0da81bb53c52de0946232513f986627786cc13a4935b407621df5a7225be" exitCode=0
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.202165 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55bfb77665-zk585" event={"ID":"02a942e1-e2f6-45ea-829d-70d45cca4860","Type":"ContainerDied","Data":"dbdc0da81bb53c52de0946232513f986627786cc13a4935b407621df5a7225be"}
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.202532 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55bfb77665-zk585" event={"ID":"02a942e1-e2f6-45ea-829d-70d45cca4860","Type":"ContainerStarted","Data":"1753c746513cb1c59737ba7752ab926efd75e39020faec8204523e61848a46e7"}
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.364546 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.365213 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-central-agent" containerID="cri-o://7ae6a212dc85dd5658f276d9604d52215998dfc0b9d6c64b0e581e0996f55f7b" gracePeriod=30
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.365367 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="proxy-httpd" containerID="cri-o://fa448aff4de343ead2c4abe1de22719e056f3c86eb5bd73425933dcb7846980b" gracePeriod=30
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.365412 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="sg-core" containerID="cri-o://17a088ec5cafe1e6e6162320027677e0f1fe4412b117204165f0c1167857a7cc" gracePeriod=30
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.365454 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-notification-agent" containerID="cri-o://534aa5780054a5eb17cd0f8ef0b6067764fd51e0205de0f27d880b90313dae9e" gracePeriod=30
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.381789 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.193:3000/\": EOF"
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.809830 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.842658 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 07:12:38 crc kubenswrapper[5028]: I1123 07:12:38.842702 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.212406 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55bfb77665-zk585" event={"ID":"02a942e1-e2f6-45ea-829d-70d45cca4860","Type":"ContainerStarted","Data":"3f849cfb8e74b4980c92c33ab679fd9a8d36c82079aa4ad05015977a5e743ef6"}
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.212751 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55bfb77665-zk585"
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.215666 5028 generic.go:334] "Generic (PLEG): container finished" podID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerID="fa448aff4de343ead2c4abe1de22719e056f3c86eb5bd73425933dcb7846980b" exitCode=0
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.215815 5028 generic.go:334] "Generic (PLEG): container finished" podID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerID="17a088ec5cafe1e6e6162320027677e0f1fe4412b117204165f0c1167857a7cc" exitCode=2
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.215876 5028 generic.go:334] "Generic (PLEG): container finished" podID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerID="7ae6a212dc85dd5658f276d9604d52215998dfc0b9d6c64b0e581e0996f55f7b" exitCode=0
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.215737 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerDied","Data":"fa448aff4de343ead2c4abe1de22719e056f3c86eb5bd73425933dcb7846980b"}
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.216077 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerDied","Data":"17a088ec5cafe1e6e6162320027677e0f1fe4412b117204165f0c1167857a7cc"}
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.216091 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerDied","Data":"7ae6a212dc85dd5658f276d9604d52215998dfc0b9d6c64b0e581e0996f55f7b"}
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.216257 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-log" containerID="cri-o://8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3" gracePeriod=30
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.216304 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-api" containerID="cri-o://8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35" gracePeriod=30
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.238783 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55bfb77665-zk585" podStartSLOduration=3.238764941 podStartE2EDuration="3.238764941s" podCreationTimestamp="2025-11-23 07:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:39.236053055 +0000 UTC m=+1342.933457844" watchObservedRunningTime="2025-11-23 07:12:39.238764941 +0000 UTC m=+1342.936169720"
Nov 23 07:12:39 crc kubenswrapper[5028]: I1123 07:12:39.562177 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:12:40 crc kubenswrapper[5028]: I1123 07:12:40.225796 5028 generic.go:334] "Generic (PLEG): container finished" podID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerID="8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3" exitCode=143
Nov 23 07:12:40 crc kubenswrapper[5028]: I1123 07:12:40.225891 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b16d3c3b-327c-4565-9e13-ad8ff67f0a52","Type":"ContainerDied","Data":"8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3"}
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.262118 5028 generic.go:334] "Generic (PLEG): container finished" podID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerID="534aa5780054a5eb17cd0f8ef0b6067764fd51e0205de0f27d880b90313dae9e" exitCode=0
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.262157 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerDied","Data":"534aa5780054a5eb17cd0f8ef0b6067764fd51e0205de0f27d880b90313dae9e"}
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.544464 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.599630 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-config-data\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.599743 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-sg-core-conf-yaml\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.599797 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j7hs\" (UniqueName: \"kubernetes.io/projected/ab980d75-12f2-4c94-8b31-aac88589fe35-kube-api-access-7j7hs\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.599886 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-ceilometer-tls-certs\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.599904 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-scripts\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.600029 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-combined-ca-bundle\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.600144 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-log-httpd\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.600183 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-run-httpd\") pod \"ab980d75-12f2-4c94-8b31-aac88589fe35\" (UID: \"ab980d75-12f2-4c94-8b31-aac88589fe35\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.600975 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.601421 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.606259 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-scripts" (OuterVolumeSpecName: "scripts") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.606652 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab980d75-12f2-4c94-8b31-aac88589fe35-kube-api-access-7j7hs" (OuterVolumeSpecName: "kube-api-access-7j7hs") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "kube-api-access-7j7hs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.644671 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.670523 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.701115 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.702364 5028 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.702383 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.702392 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.702400 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.702408 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab980d75-12f2-4c94-8b31-aac88589fe35-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.702416 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.702424 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j7hs\" (UniqueName: \"kubernetes.io/projected/ab980d75-12f2-4c94-8b31-aac88589fe35-kube-api-access-7j7hs\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.735489 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.768439 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-config-data" (OuterVolumeSpecName: "config-data") pod "ab980d75-12f2-4c94-8b31-aac88589fe35" (UID: "ab980d75-12f2-4c94-8b31-aac88589fe35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.803768 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-logs\") pod \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.804000 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tp5q\" (UniqueName: \"kubernetes.io/projected/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-kube-api-access-2tp5q\") pod \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.804024 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-config-data\") pod \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.804134 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-combined-ca-bundle\") pod \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\" (UID: \"b16d3c3b-327c-4565-9e13-ad8ff67f0a52\") "
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.804583 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab980d75-12f2-4c94-8b31-aac88589fe35-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.805570 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-logs" (OuterVolumeSpecName: "logs") pod "b16d3c3b-327c-4565-9e13-ad8ff67f0a52" (UID: "b16d3c3b-327c-4565-9e13-ad8ff67f0a52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.811458 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-kube-api-access-2tp5q" (OuterVolumeSpecName: "kube-api-access-2tp5q") pod "b16d3c3b-327c-4565-9e13-ad8ff67f0a52" (UID: "b16d3c3b-327c-4565-9e13-ad8ff67f0a52"). InnerVolumeSpecName "kube-api-access-2tp5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.831081 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b16d3c3b-327c-4565-9e13-ad8ff67f0a52" (UID: "b16d3c3b-327c-4565-9e13-ad8ff67f0a52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.844041 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-config-data" (OuterVolumeSpecName: "config-data") pod "b16d3c3b-327c-4565-9e13-ad8ff67f0a52" (UID: "b16d3c3b-327c-4565-9e13-ad8ff67f0a52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.906098 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tp5q\" (UniqueName: \"kubernetes.io/projected/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-kube-api-access-2tp5q\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.906133 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.906144 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:42 crc kubenswrapper[5028]: I1123 07:12:42.906151 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b16d3c3b-327c-4565-9e13-ad8ff67f0a52-logs\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.272994 5028 generic.go:334] "Generic (PLEG): container finished" podID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerID="8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35" exitCode=0
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.273023 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.273040 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b16d3c3b-327c-4565-9e13-ad8ff67f0a52","Type":"ContainerDied","Data":"8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35"}
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.273519 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b16d3c3b-327c-4565-9e13-ad8ff67f0a52","Type":"ContainerDied","Data":"2b300ac8a35dcdcd558dcceca6be5ecbeef0f28541c2bc99e99a7130ebcf8cd9"}
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.273552 5028 scope.go:117] "RemoveContainer" containerID="8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.277964 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab980d75-12f2-4c94-8b31-aac88589fe35","Type":"ContainerDied","Data":"7bd4589173c4206c20e83f230c7751d4341095ebf1ac95fa0fb056cf8536757f"}
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.278077 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.296907 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.298050 5028 scope.go:117] "RemoveContainer" containerID="8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.304364 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.325398 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.329779 5028 scope.go:117] "RemoveContainer" containerID="8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35"
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.330137 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35\": container with ID starting with 8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35 not found: ID does not exist" containerID="8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.330168 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35"} err="failed to get container status \"8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35\": rpc error: code = NotFound desc = could not find container \"8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35\": container with ID starting with 8fe39813c3772ee874b10c1e317fe93ed9682dc4e7c8a03a6265827ba944ce35 not found: ID does not exist"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.330190 5028 scope.go:117] "RemoveContainer" containerID="8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3"
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.330398 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3\": container with ID starting with 8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3 not found: ID does not exist" containerID="8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.330420 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3"} err="failed to get container status \"8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3\": rpc error: code = NotFound desc = could not find container \"8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3\": container with ID starting with 8118e394006c4cee306cbb186f891c58fcc75b011ce98d218a20032d46c901e3 not found: ID does not exist"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.330434 5028 scope.go:117] "RemoveContainer" containerID="fa448aff4de343ead2c4abe1de22719e056f3c86eb5bd73425933dcb7846980b"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.344740 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.357050 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.357592 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="proxy-httpd"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.357619 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="proxy-httpd"
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.357643 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-central-agent"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.357652 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-central-agent"
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.357667 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="sg-core"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.357678 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="sg-core"
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.357689 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-notification-agent"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.357696 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-notification-agent"
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.357717 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-log"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.357725 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-log"
Nov 23 07:12:43 crc kubenswrapper[5028]: E1123 07:12:43.357741 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-api"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.357749 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-api"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.358740 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="sg-core"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.358768 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-notification-agent"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.358781 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="ceilometer-central-agent"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.358811 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" containerName="proxy-httpd"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.358823 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-log"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.358838 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" containerName="nova-api-api"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.360356 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.367532 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.367831 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.367908 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.368303 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.370598 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.385748 5028 scope.go:117] "RemoveContainer" containerID="17a088ec5cafe1e6e6162320027677e0f1fe4412b117204165f0c1167857a7cc"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.385877 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.386086 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.386613 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.386775 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.395519 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.410063 5028 scope.go:117] "RemoveContainer" containerID="534aa5780054a5eb17cd0f8ef0b6067764fd51e0205de0f27d880b90313dae9e"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416113 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kmqn\" (UniqueName: \"kubernetes.io/projected/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-kube-api-access-5kmqn\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416177 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416235 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-scripts\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416254 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416277 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-config-data\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416306 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416321 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-run-httpd\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416343 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-log-httpd\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416373 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-logs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416396 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416423 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416470 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416495 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzjn\" (UniqueName: \"kubernetes.io/projected/9968b58c-c1a8-4491-b918-2c1cd8f56695-kube-api-access-dvzjn\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.416622 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-config-data\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.431266 5028 scope.go:117] "RemoveContainer" containerID="7ae6a212dc85dd5658f276d9604d52215998dfc0b9d6c64b0e581e0996f55f7b"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518578 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518629 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518675 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518697 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzjn\" (UniqueName: \"kubernetes.io/projected/9968b58c-c1a8-4491-b918-2c1cd8f56695-kube-api-access-dvzjn\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518721 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-config-data\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518744 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kmqn\" (UniqueName: \"kubernetes.io/projected/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-kube-api-access-5kmqn\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518770 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518808 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-scripts\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518826 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518848 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-config-data\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518876 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518892 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-run-httpd\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518911 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-log-httpd\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.518960 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-logs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.519352 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-logs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.519644 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-run-httpd\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.519872 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-log-httpd\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.522852 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.523249 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-config-data\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.523528 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.523941 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-config-data\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.531459 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-scripts\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.532215 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.532707 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.533408 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.535260 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.536131 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kmqn\" (UniqueName: \"kubernetes.io/projected/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-kube-api-access-5kmqn\") pod \"nova-api-0\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.536140 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzjn\" (UniqueName: \"kubernetes.io/projected/9968b58c-c1a8-4491-b918-2c1cd8f56695-kube-api-access-dvzjn\") pod \"ceilometer-0\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") " pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.696184 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.703067 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.843859 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 07:12:43 crc kubenswrapper[5028]: I1123 07:12:43.844251 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 07:12:44 crc kubenswrapper[5028]: I1123 07:12:44.238314 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 23 07:12:44 crc kubenswrapper[5028]: I1123 07:12:44.287265 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 23 07:12:44 crc kubenswrapper[5028]: I1123 07:12:44.288317 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerStarted","Data":"906ebed0ef1eb9b7876f5cb2260949844121f6efc53c048a29009fc39e4bb7db"}
Nov 23 07:12:44 crc kubenswrapper[5028]: W1123 07:12:44.290258 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d08dec6_ece4_4e8c_9780_3f6efc4b6db9.slice/crio-3044f257b833b6f56733b5fad82cca444962f958f6adf5907d760ec0ebfbada3 WatchSource:0}: Error finding container 3044f257b833b6f56733b5fad82cca444962f958f6adf5907d760ec0ebfbada3: Status 404 returned error can't find the container with id 3044f257b833b6f56733b5fad82cca444962f958f6adf5907d760ec0ebfbada3
Nov 23 07:12:44 crc kubenswrapper[5028]: I1123 07:12:44.562917 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:12:44 crc kubenswrapper[5028]: I1123 07:12:44.589863 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:12:44 crc kubenswrapper[5028]: I1123 07:12:44.859113 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 07:12:44 crc kubenswrapper[5028]: I1123 07:12:44.859138 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.062429 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab980d75-12f2-4c94-8b31-aac88589fe35" path="/var/lib/kubelet/pods/ab980d75-12f2-4c94-8b31-aac88589fe35/volumes"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.063180 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16d3c3b-327c-4565-9e13-ad8ff67f0a52" path="/var/lib/kubelet/pods/b16d3c3b-327c-4565-9e13-ad8ff67f0a52/volumes"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.319422 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9","Type":"ContainerStarted","Data":"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67"}
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.319706 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9","Type":"ContainerStarted","Data":"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159"}
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.319716 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9","Type":"ContainerStarted","Data":"3044f257b833b6f56733b5fad82cca444962f958f6adf5907d760ec0ebfbada3"}
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.322731 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerStarted","Data":"dde5c69666e70bb8f75a7ed7a789c3ae7acdc1f1acf508ccc160a39ef016782c"}
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.336884 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.336869479 podStartE2EDuration="2.336869479s" podCreationTimestamp="2025-11-23 07:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:45.335370862 +0000 UTC m=+1349.032775641" watchObservedRunningTime="2025-11-23 07:12:45.336869479 +0000 UTC m=+1349.034274258"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.343401 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.498739 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-vj4cx"]
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.499968 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.502859 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.503075 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.513274 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vj4cx"]
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.666640 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.668143 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-scripts\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.668254 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjbg\" (UniqueName: \"kubernetes.io/projected/e034942e-8d84-42d0-9515-6005be425e0d-kube-api-access-fvjbg\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.668284 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-config-data\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.770681 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-scripts\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.770777 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvjbg\" (UniqueName: \"kubernetes.io/projected/e034942e-8d84-42d0-9515-6005be425e0d-kube-api-access-fvjbg\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.770801 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-config-data\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.770834 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.777694 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.780331 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-scripts\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.780632 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-config-data\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.792250 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvjbg\" (UniqueName: \"kubernetes.io/projected/e034942e-8d84-42d0-9515-6005be425e0d-kube-api-access-fvjbg\") pod \"nova-cell1-cell-mapping-vj4cx\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:45 crc kubenswrapper[5028]: I1123 07:12:45.821239 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vj4cx"
Nov 23 07:12:46 crc kubenswrapper[5028]: I1123 07:12:46.303042 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vj4cx"]
Nov 23 07:12:46 crc kubenswrapper[5028]: I1123 07:12:46.336001 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerStarted","Data":"fb676e1b2e2c02ba89cd38ce4f10b302e56efd80c670bac63573599363bc4fb7"}
Nov 23 07:12:46 crc kubenswrapper[5028]: I1123 07:12:46.337054 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerStarted","Data":"18362900b5e9f33833027654ded3247f26684450ed496934014046dcf552169b"}
Nov 23 07:12:46 crc kubenswrapper[5028]: I1123 07:12:46.338358 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vj4cx" event={"ID":"e034942e-8d84-42d0-9515-6005be425e0d","Type":"ContainerStarted","Data":"4a7d1c1761bb379e477936d209fc6993989dc7977bfb8f2570b05e1787c5f96e"}
Nov 23 07:12:46 crc kubenswrapper[5028]: I1123 07:12:46.758104 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55bfb77665-zk585"
Nov 23 07:12:46 crc kubenswrapper[5028]: I1123 07:12:46.842693 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64dbf5859c-gm6vb"]
Nov 23 07:12:46 crc kubenswrapper[5028]: I1123 07:12:46.843156 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" podUID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerName="dnsmasq-dns" containerID="cri-o://d347353db33af1d6ba4d84c69e24716a8023002479762dc4dc845b00b4cdd85d" gracePeriod=10
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.388367 5028 generic.go:334] "Generic (PLEG): container finished" podID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerID="d347353db33af1d6ba4d84c69e24716a8023002479762dc4dc845b00b4cdd85d" exitCode=0
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.388681 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" event={"ID":"83c035a0-1f60-4649-bace-86aa5ee413ce","Type":"ContainerDied","Data":"d347353db33af1d6ba4d84c69e24716a8023002479762dc4dc845b00b4cdd85d"}
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.388709 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb" event={"ID":"83c035a0-1f60-4649-bace-86aa5ee413ce","Type":"ContainerDied","Data":"c719bbec86bdbf319cbd3a617102b5a293b22a27f6046d3868e2630b80f305dc"}
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.388754 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c719bbec86bdbf319cbd3a617102b5a293b22a27f6046d3868e2630b80f305dc"
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.390163 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vj4cx" event={"ID":"e034942e-8d84-42d0-9515-6005be425e0d","Type":"ContainerStarted","Data":"c99dbc45954ad6ec68de64bda24d6319695ecb2a85091d9874ecea7b556a7023"}
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.408681 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vj4cx" podStartSLOduration=2.408665963 podStartE2EDuration="2.408665963s" podCreationTimestamp="2025-11-23 07:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:47.406412628 +0000 UTC m=+1351.103817407" watchObservedRunningTime="2025-11-23 07:12:47.408665963 +0000 UTC m=+1351.106070742"
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.465245 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb"
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.622563 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-sb\") pod \"83c035a0-1f60-4649-bace-86aa5ee413ce\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") "
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.622673 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-svc\") pod \"83c035a0-1f60-4649-bace-86aa5ee413ce\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") "
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.622728 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-config\") pod \"83c035a0-1f60-4649-bace-86aa5ee413ce\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") "
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.622760 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69rjp\" (UniqueName: \"kubernetes.io/projected/83c035a0-1f60-4649-bace-86aa5ee413ce-kube-api-access-69rjp\") pod \"83c035a0-1f60-4649-bace-86aa5ee413ce\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") "
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.622810 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-swift-storage-0\") pod \"83c035a0-1f60-4649-bace-86aa5ee413ce\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") "
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.622959 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-nb\") pod \"83c035a0-1f60-4649-bace-86aa5ee413ce\" (UID: \"83c035a0-1f60-4649-bace-86aa5ee413ce\") "
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.628673 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c035a0-1f60-4649-bace-86aa5ee413ce-kube-api-access-69rjp" (OuterVolumeSpecName: "kube-api-access-69rjp") pod "83c035a0-1f60-4649-bace-86aa5ee413ce" (UID: "83c035a0-1f60-4649-bace-86aa5ee413ce"). InnerVolumeSpecName "kube-api-access-69rjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.678426 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83c035a0-1f60-4649-bace-86aa5ee413ce" (UID: "83c035a0-1f60-4649-bace-86aa5ee413ce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.679645 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-config" (OuterVolumeSpecName: "config") pod "83c035a0-1f60-4649-bace-86aa5ee413ce" (UID: "83c035a0-1f60-4649-bace-86aa5ee413ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.680719 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "83c035a0-1f60-4649-bace-86aa5ee413ce" (UID: "83c035a0-1f60-4649-bace-86aa5ee413ce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.697287 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "83c035a0-1f60-4649-bace-86aa5ee413ce" (UID: "83c035a0-1f60-4649-bace-86aa5ee413ce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.705379 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "83c035a0-1f60-4649-bace-86aa5ee413ce" (UID: "83c035a0-1f60-4649-bace-86aa5ee413ce"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.725193 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69rjp\" (UniqueName: \"kubernetes.io/projected/83c035a0-1f60-4649-bace-86aa5ee413ce-kube-api-access-69rjp\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.725237 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.725249 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.725261 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.725274 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:47 crc kubenswrapper[5028]: I1123 07:12:47.725285 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c035a0-1f60-4649-bace-86aa5ee413ce-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:12:48 crc kubenswrapper[5028]: I1123 07:12:48.400571 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerStarted","Data":"66ffc82e2a92f52c482cb6d579b807e3af34155790eb820d2d16eecf464cd86d"}
Nov 23 07:12:48 crc kubenswrapper[5028]: I1123 07:12:48.401255 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 23 07:12:48 crc kubenswrapper[5028]: I1123 07:12:48.401377 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64dbf5859c-gm6vb"
Nov 23 07:12:48 crc kubenswrapper[5028]: I1123 07:12:48.429770 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.40850081 podStartE2EDuration="5.429743053s" podCreationTimestamp="2025-11-23 07:12:43 +0000 UTC" firstStartedPulling="2025-11-23 07:12:44.226343372 +0000 UTC m=+1347.923748151" lastFinishedPulling="2025-11-23 07:12:47.247585615 +0000 UTC m=+1350.944990394" observedRunningTime="2025-11-23 07:12:48.428907793 +0000 UTC m=+1352.126312562" watchObservedRunningTime="2025-11-23 07:12:48.429743053 +0000 UTC m=+1352.127147832"
Nov 23 07:12:48 crc kubenswrapper[5028]: I1123 07:12:48.450445 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64dbf5859c-gm6vb"]
Nov 23 07:12:48 crc kubenswrapper[5028]: I1123 07:12:48.458178 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64dbf5859c-gm6vb"]
Nov 23 07:12:49 crc kubenswrapper[5028]: I1123 07:12:49.063584 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c035a0-1f60-4649-bace-86aa5ee413ce" path="/var/lib/kubelet/pods/83c035a0-1f60-4649-bace-86aa5ee413ce/volumes"
Nov 23 07:12:51 crc kubenswrapper[5028]: I1123 07:12:51.439730 5028 generic.go:334] "Generic (PLEG): container finished" podID="e034942e-8d84-42d0-9515-6005be425e0d" containerID="c99dbc45954ad6ec68de64bda24d6319695ecb2a85091d9874ecea7b556a7023" exitCode=0
Nov 23 07:12:51 crc kubenswrapper[5028]: I1123 07:12:51.439794 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vj4cx" event={"ID":"e034942e-8d84-42d0-9515-6005be425e0d","Type":"ContainerDied","Data":"c99dbc45954ad6ec68de64bda24d6319695ecb2a85091d9874ecea7b556a7023"}
Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.814930 5028 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vj4cx" Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.930920 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-config-data\") pod \"e034942e-8d84-42d0-9515-6005be425e0d\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.931154 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-scripts\") pod \"e034942e-8d84-42d0-9515-6005be425e0d\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.931206 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-combined-ca-bundle\") pod \"e034942e-8d84-42d0-9515-6005be425e0d\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.931871 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvjbg\" (UniqueName: \"kubernetes.io/projected/e034942e-8d84-42d0-9515-6005be425e0d-kube-api-access-fvjbg\") pod \"e034942e-8d84-42d0-9515-6005be425e0d\" (UID: \"e034942e-8d84-42d0-9515-6005be425e0d\") " Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.937419 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-scripts" (OuterVolumeSpecName: "scripts") pod "e034942e-8d84-42d0-9515-6005be425e0d" (UID: "e034942e-8d84-42d0-9515-6005be425e0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.937455 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e034942e-8d84-42d0-9515-6005be425e0d-kube-api-access-fvjbg" (OuterVolumeSpecName: "kube-api-access-fvjbg") pod "e034942e-8d84-42d0-9515-6005be425e0d" (UID: "e034942e-8d84-42d0-9515-6005be425e0d"). InnerVolumeSpecName "kube-api-access-fvjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.957535 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-config-data" (OuterVolumeSpecName: "config-data") pod "e034942e-8d84-42d0-9515-6005be425e0d" (UID: "e034942e-8d84-42d0-9515-6005be425e0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:52 crc kubenswrapper[5028]: I1123 07:12:52.963755 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e034942e-8d84-42d0-9515-6005be425e0d" (UID: "e034942e-8d84-42d0-9515-6005be425e0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.033566 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.033600 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.033611 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvjbg\" (UniqueName: \"kubernetes.io/projected/e034942e-8d84-42d0-9515-6005be425e0d-kube-api-access-fvjbg\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.033620 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e034942e-8d84-42d0-9515-6005be425e0d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.464564 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vj4cx" event={"ID":"e034942e-8d84-42d0-9515-6005be425e0d","Type":"ContainerDied","Data":"4a7d1c1761bb379e477936d209fc6993989dc7977bfb8f2570b05e1787c5f96e"} Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.464930 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7d1c1761bb379e477936d209fc6993989dc7977bfb8f2570b05e1787c5f96e" Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.464623 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vj4cx" Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.630834 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.631124 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-log" containerID="cri-o://35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159" gracePeriod=30 Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.631288 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-api" containerID="cri-o://c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67" gracePeriod=30 Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.646848 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.647156 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="54b90a4e-034d-4c8d-bf93-ed27f5467b32" containerName="nova-scheduler-scheduler" containerID="cri-o://9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" gracePeriod=30 Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.670585 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.670866 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" 
containerName="nova-metadata-log" containerID="cri-o://89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151" gracePeriod=30 Nov 23 07:12:53 crc kubenswrapper[5028]: I1123 07:12:53.670942 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-metadata" containerID="cri-o://217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e" gracePeriod=30 Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.273699 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.386297 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-internal-tls-certs\") pod \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.386639 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-config-data\") pod \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.386690 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kmqn\" (UniqueName: \"kubernetes.io/projected/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-kube-api-access-5kmqn\") pod \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.386724 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-combined-ca-bundle\") pod \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.386745 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-public-tls-certs\") pod \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.386905 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-logs\") pod \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\" (UID: \"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9\") " Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.387302 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-logs" (OuterVolumeSpecName: "logs") pod "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" (UID: "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.387474 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.398157 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-kube-api-access-5kmqn" (OuterVolumeSpecName: "kube-api-access-5kmqn") pod "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" (UID: "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9"). InnerVolumeSpecName "kube-api-access-5kmqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.423352 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" (UID: "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.423561 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-config-data" (OuterVolumeSpecName: "config-data") pod "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" (UID: "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.443554 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" (UID: "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.448382 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" (UID: "4d08dec6-ece4-4e8c-9780-3f6efc4b6db9"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.476096 5028 generic.go:334] "Generic (PLEG): container finished" podID="aff542a1-9fb0-40e9-8428-da161db08c91" containerID="89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151" exitCode=143 Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.476164 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aff542a1-9fb0-40e9-8428-da161db08c91","Type":"ContainerDied","Data":"89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151"} Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.478530 5028 generic.go:334] "Generic (PLEG): container finished" podID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerID="c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67" exitCode=0 Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.478555 5028 generic.go:334] "Generic (PLEG): container finished" podID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerID="35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159" exitCode=143 Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.478572 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9","Type":"ContainerDied","Data":"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67"} Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.478593 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9","Type":"ContainerDied","Data":"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159"} Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.478604 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4d08dec6-ece4-4e8c-9780-3f6efc4b6db9","Type":"ContainerDied","Data":"3044f257b833b6f56733b5fad82cca444962f958f6adf5907d760ec0ebfbada3"} Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.478620 5028 scope.go:117] "RemoveContainer" containerID="c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.478628 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.489027 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.489241 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.489310 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kmqn\" (UniqueName: \"kubernetes.io/projected/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-kube-api-access-5kmqn\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.489371 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.489432 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.503705 5028 scope.go:117] "RemoveContainer" containerID="35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.518472 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.536926 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.539115 5028 scope.go:117] "RemoveContainer" containerID="c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67" Nov 23 07:12:54 crc kubenswrapper[5028]: E1123 07:12:54.539790 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67\": container with ID starting with c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67 not found: ID does not exist" containerID="c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.539831 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67"} err="failed to get container status \"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67\": rpc error: code = NotFound desc = could not find container \"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67\": container with ID starting with c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67 not found: ID does not exist" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.539856 5028 scope.go:117] "RemoveContainer" containerID="35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159" Nov 23 07:12:54 crc kubenswrapper[5028]: E1123 07:12:54.540224 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159\": 
container with ID starting with 35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159 not found: ID does not exist" containerID="35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.540254 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159"} err="failed to get container status \"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159\": rpc error: code = NotFound desc = could not find container \"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159\": container with ID starting with 35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159 not found: ID does not exist" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.540271 5028 scope.go:117] "RemoveContainer" containerID="c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.540655 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67"} err="failed to get container status \"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67\": rpc error: code = NotFound desc = could not find container \"c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67\": container with ID starting with c001fa5128ac87b46eda74e9110c7f7514f948fb48dc11088388eb39f8cc1d67 not found: ID does not exist" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.540717 5028 scope.go:117] "RemoveContainer" containerID="35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.541135 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159"} err="failed to get container status \"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159\": rpc error: code = NotFound desc = could not find container \"35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159\": container with ID starting with 35eadbd665aff16e798ed033c17558e41772c38c24c4636d0ce14f8b793ab159 not found: ID does not exist" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.561657 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:54 crc kubenswrapper[5028]: E1123 07:12:54.562177 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-log" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562195 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-log" Nov 23 07:12:54 crc kubenswrapper[5028]: E1123 07:12:54.562216 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerName="dnsmasq-dns" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562223 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerName="dnsmasq-dns" Nov 23 07:12:54 crc kubenswrapper[5028]: E1123 07:12:54.562231 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-api" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562240 5028 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-api" Nov 23 07:12:54 crc kubenswrapper[5028]: E1123 07:12:54.562261 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e034942e-8d84-42d0-9515-6005be425e0d" containerName="nova-manage" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562270 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e034942e-8d84-42d0-9515-6005be425e0d" containerName="nova-manage" Nov 23 07:12:54 crc kubenswrapper[5028]: E1123 07:12:54.562299 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerName="init" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562306 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerName="init" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562546 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c035a0-1f60-4649-bace-86aa5ee413ce" containerName="dnsmasq-dns" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562564 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e034942e-8d84-42d0-9515-6005be425e0d" containerName="nova-manage" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562575 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-api" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.562592 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" containerName="nova-api-log" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.563812 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.567015 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.568067 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.568350 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.571135 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.696789 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.697180 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nphlw\" (UniqueName: \"kubernetes.io/projected/3a92efa4-12b6-431d-8aa7-9baa545f7e07-kube-api-access-nphlw\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.697314 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-public-tls-certs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.697417 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-config-data\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.697502 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.697629 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a92efa4-12b6-431d-8aa7-9baa545f7e07-logs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.799804 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-config-data\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.799879 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.799971 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a92efa4-12b6-431d-8aa7-9baa545f7e07-logs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.800014 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.800159 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nphlw\" (UniqueName: \"kubernetes.io/projected/3a92efa4-12b6-431d-8aa7-9baa545f7e07-kube-api-access-nphlw\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.800207 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-public-tls-certs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.801275 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a92efa4-12b6-431d-8aa7-9baa545f7e07-logs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.803737 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-config-data\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.803834 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.804623 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-public-tls-certs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.805106 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.819703 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nphlw\" (UniqueName: \"kubernetes.io/projected/3a92efa4-12b6-431d-8aa7-9baa545f7e07-kube-api-access-nphlw\") pod \"nova-api-0\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " pod="openstack/nova-api-0" Nov 
23 07:12:54 crc kubenswrapper[5028]: I1123 07:12:54.880422 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:12:55 crc kubenswrapper[5028]: I1123 07:12:55.065354 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d08dec6-ece4-4e8c-9780-3f6efc4b6db9" path="/var/lib/kubelet/pods/4d08dec6-ece4-4e8c-9780-3f6efc4b6db9/volumes" Nov 23 07:12:55 crc kubenswrapper[5028]: I1123 07:12:55.366106 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:12:55 crc kubenswrapper[5028]: W1123 07:12:55.371629 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a92efa4_12b6_431d_8aa7_9baa545f7e07.slice/crio-99a6577748e15bd2bece4c27f21d4ea6a0643f13d49a8c18aad3363986e0a1f8 WatchSource:0}: Error finding container 99a6577748e15bd2bece4c27f21d4ea6a0643f13d49a8c18aad3363986e0a1f8: Status 404 returned error can't find the container with id 99a6577748e15bd2bece4c27f21d4ea6a0643f13d49a8c18aad3363986e0a1f8 Nov 23 07:12:55 crc kubenswrapper[5028]: E1123 07:12:55.429073 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:12:55 crc kubenswrapper[5028]: E1123 07:12:55.430920 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:12:55 crc kubenswrapper[5028]: E1123 07:12:55.432090 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:12:55 crc kubenswrapper[5028]: E1123 07:12:55.432126 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="54b90a4e-034d-4c8d-bf93-ed27f5467b32" containerName="nova-scheduler-scheduler" Nov 23 07:12:55 crc kubenswrapper[5028]: I1123 07:12:55.491461 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a92efa4-12b6-431d-8aa7-9baa545f7e07","Type":"ContainerStarted","Data":"99a6577748e15bd2bece4c27f21d4ea6a0643f13d49a8c18aad3363986e0a1f8"} Nov 23 07:12:56 crc kubenswrapper[5028]: I1123 07:12:56.504987 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a92efa4-12b6-431d-8aa7-9baa545f7e07","Type":"ContainerStarted","Data":"b98edfeee83e098dad0a822a2d765cd74a15be4763ca4b5c2fbb0a7a9fda7f9f"} Nov 23 07:12:56 crc kubenswrapper[5028]: I1123 07:12:56.505261 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a92efa4-12b6-431d-8aa7-9baa545f7e07","Type":"ContainerStarted","Data":"30582d733bd8b84f7eb9843365d1cb30de06952daa9b64528d1e925263ed1ee5"} Nov 23 
07:12:56 crc kubenswrapper[5028]: I1123 07:12:56.528284 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.528264921 podStartE2EDuration="2.528264921s" podCreationTimestamp="2025-11-23 07:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:56.5224788 +0000 UTC m=+1360.219883619" watchObservedRunningTime="2025-11-23 07:12:56.528264921 +0000 UTC m=+1360.225669700" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.282136 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.454144 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff542a1-9fb0-40e9-8428-da161db08c91-logs\") pod \"aff542a1-9fb0-40e9-8428-da161db08c91\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.454266 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-nova-metadata-tls-certs\") pod \"aff542a1-9fb0-40e9-8428-da161db08c91\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.454476 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-combined-ca-bundle\") pod \"aff542a1-9fb0-40e9-8428-da161db08c91\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.454652 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-config-data\") pod \"aff542a1-9fb0-40e9-8428-da161db08c91\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.454685 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff542a1-9fb0-40e9-8428-da161db08c91-logs" (OuterVolumeSpecName: "logs") pod "aff542a1-9fb0-40e9-8428-da161db08c91" (UID: "aff542a1-9fb0-40e9-8428-da161db08c91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.454701 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcvm2\" (UniqueName: \"kubernetes.io/projected/aff542a1-9fb0-40e9-8428-da161db08c91-kube-api-access-xcvm2\") pod \"aff542a1-9fb0-40e9-8428-da161db08c91\" (UID: \"aff542a1-9fb0-40e9-8428-da161db08c91\") " Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.455735 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aff542a1-9fb0-40e9-8428-da161db08c91-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.460144 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff542a1-9fb0-40e9-8428-da161db08c91-kube-api-access-xcvm2" (OuterVolumeSpecName: "kube-api-access-xcvm2") pod "aff542a1-9fb0-40e9-8428-da161db08c91" (UID: "aff542a1-9fb0-40e9-8428-da161db08c91"). 
InnerVolumeSpecName "kube-api-access-xcvm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.480790 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-config-data" (OuterVolumeSpecName: "config-data") pod "aff542a1-9fb0-40e9-8428-da161db08c91" (UID: "aff542a1-9fb0-40e9-8428-da161db08c91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.488135 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aff542a1-9fb0-40e9-8428-da161db08c91" (UID: "aff542a1-9fb0-40e9-8428-da161db08c91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.509280 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "aff542a1-9fb0-40e9-8428-da161db08c91" (UID: "aff542a1-9fb0-40e9-8428-da161db08c91"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.515589 5028 generic.go:334] "Generic (PLEG): container finished" podID="aff542a1-9fb0-40e9-8428-da161db08c91" containerID="217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e" exitCode=0 Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.515663 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.515674 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aff542a1-9fb0-40e9-8428-da161db08c91","Type":"ContainerDied","Data":"217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e"} Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.515741 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"aff542a1-9fb0-40e9-8428-da161db08c91","Type":"ContainerDied","Data":"e196c30d63f4229afeee64724b4d0f844fb16073843371fe55a87d4af3b5cbcc"} Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.515768 5028 scope.go:117] "RemoveContainer" containerID="217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.557437 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.557471 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.557484 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcvm2\" (UniqueName: \"kubernetes.io/projected/aff542a1-9fb0-40e9-8428-da161db08c91-kube-api-access-xcvm2\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.557495 5028 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff542a1-9fb0-40e9-8428-da161db08c91-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.569856 5028 scope.go:117] "RemoveContainer" containerID="89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.572977 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.595776 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.606190 5028 scope.go:117] "RemoveContainer" containerID="217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e" Nov 23 07:12:57 crc kubenswrapper[5028]: E1123 07:12:57.608146 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e\": container with ID starting with 217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e not found: ID does not exist" containerID="217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.608196 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e"} err="failed to get container status \"217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e\": rpc error: code = NotFound desc = could not find container \"217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e\": container with ID starting with 
217db901872e4bf4553bf8460ddc7918da6766d78727f2bf40ef265f6dce135e not found: ID does not exist" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.608226 5028 scope.go:117] "RemoveContainer" containerID="89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.611028 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:57 crc kubenswrapper[5028]: E1123 07:12:57.611545 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-metadata" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.611562 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-metadata" Nov 23 07:12:57 crc kubenswrapper[5028]: E1123 07:12:57.611613 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-log" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.611621 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-log" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.611818 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-log" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.611839 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" containerName="nova-metadata-metadata" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.613139 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: E1123 07:12:57.614186 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151\": container with ID starting with 89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151 not found: ID does not exist" containerID="89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.614222 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151"} err="failed to get container status \"89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151\": rpc error: code = NotFound desc = could not find container \"89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151\": container with ID starting with 89a444f5ff2cd3ccf8a161160b960eff05593f55408bd3c6c1bccb57d9950151 not found: ID does not exist" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.617874 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.618168 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.627773 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.760812 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.760855 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.760911 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-logs\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.760997 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.761050 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcrfw\" (UniqueName: \"kubernetes.io/projected/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-kube-api-access-jcrfw\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.862986 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.863039 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.863088 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-logs\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.863133 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.863172 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcrfw\" (UniqueName: \"kubernetes.io/projected/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-kube-api-access-jcrfw\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " 
pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.863710 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-logs\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.868004 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.868233 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.868874 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.884374 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcrfw\" (UniqueName: \"kubernetes.io/projected/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-kube-api-access-jcrfw\") pod \"nova-metadata-0\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " pod="openstack/nova-metadata-0" Nov 23 07:12:57 crc kubenswrapper[5028]: I1123 07:12:57.937934 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:12:58 crc kubenswrapper[5028]: I1123 07:12:58.392020 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:12:58 crc kubenswrapper[5028]: W1123 07:12:58.392392 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a15e7b9_a2f9_41bb_bdf0_0d474eabb2ab.slice/crio-8c1c06272ffce2f8fa6d5f9459b161d0f84f0b98e6629af52efa09c5832a0791 WatchSource:0}: Error finding container 8c1c06272ffce2f8fa6d5f9459b161d0f84f0b98e6629af52efa09c5832a0791: Status 404 returned error can't find the container with id 8c1c06272ffce2f8fa6d5f9459b161d0f84f0b98e6629af52efa09c5832a0791 Nov 23 07:12:58 crc kubenswrapper[5028]: I1123 07:12:58.526829 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab","Type":"ContainerStarted","Data":"8c1c06272ffce2f8fa6d5f9459b161d0f84f0b98e6629af52efa09c5832a0791"} Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.062700 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff542a1-9fb0-40e9-8428-da161db08c91" path="/var/lib/kubelet/pods/aff542a1-9fb0-40e9-8428-da161db08c91/volumes" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.329028 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.490315 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-combined-ca-bundle\") pod \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.490448 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-config-data\") pod \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.490536 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqqt8\" (UniqueName: \"kubernetes.io/projected/54b90a4e-034d-4c8d-bf93-ed27f5467b32-kube-api-access-fqqt8\") pod \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\" (UID: \"54b90a4e-034d-4c8d-bf93-ed27f5467b32\") " Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.494824 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b90a4e-034d-4c8d-bf93-ed27f5467b32-kube-api-access-fqqt8" (OuterVolumeSpecName: "kube-api-access-fqqt8") pod "54b90a4e-034d-4c8d-bf93-ed27f5467b32" (UID: "54b90a4e-034d-4c8d-bf93-ed27f5467b32"). InnerVolumeSpecName "kube-api-access-fqqt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.521901 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-config-data" (OuterVolumeSpecName: "config-data") pod "54b90a4e-034d-4c8d-bf93-ed27f5467b32" (UID: "54b90a4e-034d-4c8d-bf93-ed27f5467b32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.522537 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54b90a4e-034d-4c8d-bf93-ed27f5467b32" (UID: "54b90a4e-034d-4c8d-bf93-ed27f5467b32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.539777 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab","Type":"ContainerStarted","Data":"5dc8cea2a082f78c34b4f793fb541c20becf1183832ccdc56cdb5c470fec475a"} Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.539820 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab","Type":"ContainerStarted","Data":"fb829f046617ccb1432e54fee659ef7e9fceb2ee18d795f15b09ddc7e9e8e047"} Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.541678 5028 generic.go:334] "Generic (PLEG): container finished" podID="54b90a4e-034d-4c8d-bf93-ed27f5467b32" containerID="9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" exitCode=0 Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.541720 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54b90a4e-034d-4c8d-bf93-ed27f5467b32","Type":"ContainerDied","Data":"9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81"} Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.541723 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.541761 5028 scope.go:117] "RemoveContainer" containerID="9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.541747 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54b90a4e-034d-4c8d-bf93-ed27f5467b32","Type":"ContainerDied","Data":"6731bc3f10bf50f12b836e9b9fe562d7ff948c204dea908ece1c925856a80014"} Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.568413 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5683925370000003 podStartE2EDuration="2.568392537s" podCreationTimestamp="2025-11-23 07:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:12:59.557731256 +0000 UTC m=+1363.255136045" watchObservedRunningTime="2025-11-23 07:12:59.568392537 +0000 UTC m=+1363.265797316" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.583072 5028 scope.go:117] "RemoveContainer" containerID="9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" Nov 23 07:12:59 crc kubenswrapper[5028]: E1123 07:12:59.583660 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81\": container with ID starting with 9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81 not found: ID does not exist" containerID="9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.583708 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81"} err="failed to get container status \"9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81\": rpc error: code = NotFound desc = could not find container \"9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81\": container 
with ID starting with 9392dc06585bd915413822e089eda05e8fd5ddd8a48b24597fb314ba49741c81 not found: ID does not exist" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.592915 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.592966 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54b90a4e-034d-4c8d-bf93-ed27f5467b32-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.592981 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqqt8\" (UniqueName: \"kubernetes.io/projected/54b90a4e-034d-4c8d-bf93-ed27f5467b32-kube-api-access-fqqt8\") on node \"crc\" DevicePath \"\"" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.602419 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.631377 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.642912 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:59 crc kubenswrapper[5028]: E1123 07:12:59.643546 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b90a4e-034d-4c8d-bf93-ed27f5467b32" containerName="nova-scheduler-scheduler" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.643566 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b90a4e-034d-4c8d-bf93-ed27f5467b32" containerName="nova-scheduler-scheduler" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.643921 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="54b90a4e-034d-4c8d-bf93-ed27f5467b32" containerName="nova-scheduler-scheduler" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.644912 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.650449 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.654688 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.796474 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-config-data\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.796911 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hng4t\" (UniqueName: \"kubernetes.io/projected/c60c122c-3364-4717-bc34-5a610c1a1ac8-kube-api-access-hng4t\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.797003 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.898237 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hng4t\" (UniqueName: \"kubernetes.io/projected/c60c122c-3364-4717-bc34-5a610c1a1ac8-kube-api-access-hng4t\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.898299 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.898332 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-config-data\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.904435 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.904786 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-config-data\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.915765 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hng4t\" (UniqueName: 
\"kubernetes.io/projected/c60c122c-3364-4717-bc34-5a610c1a1ac8-kube-api-access-hng4t\") pod \"nova-scheduler-0\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " pod="openstack/nova-scheduler-0" Nov 23 07:12:59 crc kubenswrapper[5028]: I1123 07:12:59.964847 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:13:00 crc kubenswrapper[5028]: I1123 07:13:00.411881 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:13:00 crc kubenswrapper[5028]: I1123 07:13:00.552294 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c60c122c-3364-4717-bc34-5a610c1a1ac8","Type":"ContainerStarted","Data":"17c5ce5eca3e692849207165f20c490b180f8c5e9551ff30cb56f7c090630fd1"} Nov 23 07:13:01 crc kubenswrapper[5028]: I1123 07:13:01.070989 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b90a4e-034d-4c8d-bf93-ed27f5467b32" path="/var/lib/kubelet/pods/54b90a4e-034d-4c8d-bf93-ed27f5467b32/volumes" Nov 23 07:13:01 crc kubenswrapper[5028]: I1123 07:13:01.566264 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c60c122c-3364-4717-bc34-5a610c1a1ac8","Type":"ContainerStarted","Data":"d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79"} Nov 23 07:13:01 crc kubenswrapper[5028]: I1123 07:13:01.584619 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.584600892 podStartE2EDuration="2.584600892s" podCreationTimestamp="2025-11-23 07:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:13:01.579161449 +0000 UTC m=+1365.276566228" watchObservedRunningTime="2025-11-23 07:13:01.584600892 +0000 UTC m=+1365.282005671" Nov 23 07:13:02 crc kubenswrapper[5028]: I1123 07:13:02.938671 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:13:02 crc kubenswrapper[5028]: I1123 07:13:02.939046 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 07:13:04 crc kubenswrapper[5028]: I1123 07:13:04.881294 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:13:04 crc kubenswrapper[5028]: I1123 07:13:04.881720 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 07:13:04 crc kubenswrapper[5028]: I1123 07:13:04.965813 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 07:13:05 crc kubenswrapper[5028]: I1123 07:13:05.897459 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 07:13:05 crc kubenswrapper[5028]: I1123 07:13:05.906925 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.200:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:13:07 crc kubenswrapper[5028]: I1123 07:13:07.938463 5028 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:13:07 crc kubenswrapper[5028]: I1123 07:13:07.938873 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 07:13:08 crc kubenswrapper[5028]: I1123 07:13:08.953132 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:13:08 crc kubenswrapper[5028]: I1123 07:13:08.953170 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 07:13:09 crc kubenswrapper[5028]: I1123 07:13:09.966050 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 07:13:09 crc kubenswrapper[5028]: I1123 07:13:09.995740 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 07:13:10 crc kubenswrapper[5028]: I1123 07:13:10.679625 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 07:13:13 crc kubenswrapper[5028]: I1123 07:13:13.712138 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 07:13:14 crc kubenswrapper[5028]: I1123 07:13:14.887700 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:13:14 crc kubenswrapper[5028]: I1123 07:13:14.888964 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:13:14 crc kubenswrapper[5028]: I1123 07:13:14.893284 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 07:13:14 crc kubenswrapper[5028]: I1123 07:13:14.896436 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:13:15 crc kubenswrapper[5028]: I1123 07:13:15.700314 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 07:13:15 crc kubenswrapper[5028]: I1123 07:13:15.708274 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 07:13:17 crc kubenswrapper[5028]: I1123 07:13:17.945045 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:13:17 crc kubenswrapper[5028]: I1123 07:13:17.946522 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 07:13:17 crc kubenswrapper[5028]: I1123 07:13:17.951402 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 07:13:18 crc kubenswrapper[5028]: I1123 07:13:18.738684 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 07:13:36 crc kubenswrapper[5028]: I1123 07:13:36.747091 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 23 07:13:36 crc kubenswrapper[5028]: I1123 
07:13:36.747718 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="e2100d9d-d4e3-40aa-8082-e6536e2ed096" containerName="openstackclient" containerID="cri-o://378f53f6920941038405cc29b0d2089fe43431d937244b33f18351586a7ec8e8" gracePeriod=2 Nov 23 07:13:36 crc kubenswrapper[5028]: I1123 07:13:36.761223 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 23 07:13:36 crc kubenswrapper[5028]: I1123 07:13:36.964998 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.032981 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.033406 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="openstack-network-exporter" containerID="cri-o://1ff78194c0b6fef4a35d4ce365c5ce7a085c94cc689d51e542e7782f680a7d0a" gracePeriod=30 Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.033788 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="ovn-northd" containerID="cri-o://133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" gracePeriod=30 Nov 23 07:13:37 crc kubenswrapper[5028]: E1123 07:13:37.048084 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:37 crc kubenswrapper[5028]: E1123 07:13:37.048157 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data podName:9a20ff76-1a5a-4070-b5ae-c8baf133c9d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:37.548138117 +0000 UTC m=+1401.245542896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7") : configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.278324 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron8471-account-delete-6lm46"] Nov 23 07:13:37 crc kubenswrapper[5028]: E1123 07:13:37.279109 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2100d9d-d4e3-40aa-8082-e6536e2ed096" containerName="openstackclient" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.279128 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2100d9d-d4e3-40aa-8082-e6536e2ed096" containerName="openstackclient" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.279321 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2100d9d-d4e3-40aa-8082-e6536e2ed096" containerName="openstackclient" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.281049 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron8471-account-delete-6lm46"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.281074 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.281361 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.282901 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="openstack-network-exporter" containerID="cri-o://22f3b63ee8932af549b2d73e2cdf2df0d790f7927e67756a4ac6bf3630f5b56b" gracePeriod=300 Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.355519 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqpjr\" (UniqueName: \"kubernetes.io/projected/bf69949e-5b98-4cb2-9ce6-f979da44c58e-kube-api-access-qqpjr\") pod \"neutron8471-account-delete-6lm46\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.355595 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf69949e-5b98-4cb2-9ce6-f979da44c58e-operator-scripts\") pod \"neutron8471-account-delete-6lm46\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.367892 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican5527-account-delete-7gzrg"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.369228 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.429534 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican5527-account-delete-7gzrg"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.457137 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqpjr\" (UniqueName: \"kubernetes.io/projected/bf69949e-5b98-4cb2-9ce6-f979da44c58e-kube-api-access-qqpjr\") pod \"neutron8471-account-delete-6lm46\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.457201 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf69949e-5b98-4cb2-9ce6-f979da44c58e-operator-scripts\") pod \"neutron8471-account-delete-6lm46\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.457255 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts\") pod \"barbican5527-account-delete-7gzrg\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.457291 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnpr7\" (UniqueName: \"kubernetes.io/projected/a00e6664-ab67-4532-9c12-89c6fa223993-kube-api-access-tnpr7\") pod \"barbican5527-account-delete-7gzrg\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.458247 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf69949e-5b98-4cb2-9ce6-f979da44c58e-operator-scripts\") pod \"neutron8471-account-delete-6lm46\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.465702 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.509023 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.509823 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="openstack-network-exporter" containerID="cri-o://d82fd949fe89f76a065fd1501ba17537d165f20a736688a50648e098ab446adf" gracePeriod=300 Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.565917 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts\") pod \"barbican5527-account-delete-7gzrg\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.566020 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnpr7\" (UniqueName: \"kubernetes.io/projected/a00e6664-ab67-4532-9c12-89c6fa223993-kube-api-access-tnpr7\") pod \"barbican5527-account-delete-7gzrg\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: E1123 07:13:37.566275 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:37 crc kubenswrapper[5028]: E1123 07:13:37.566324 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data podName:9a20ff76-1a5a-4070-b5ae-c8baf133c9d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:38.566309354 +0000 UTC m=+1402.263714133 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7") : configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.567164 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts\") pod \"barbican5527-account-delete-7gzrg\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.567593 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-7hptw"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.568660 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqpjr\" (UniqueName: \"kubernetes.io/projected/bf69949e-5b98-4cb2-9ce6-f979da44c58e-kube-api-access-qqpjr\") pod \"neutron8471-account-delete-6lm46\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.608657 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-7hptw"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.662377 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:37 crc kubenswrapper[5028]: E1123 07:13:37.672392 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:13:37 crc kubenswrapper[5028]: E1123 07:13:37.672502 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data podName:8399afb1-fbd2-4ce0-b980-46b317d6cfee nodeName:}" failed. No retries permitted until 2025-11-23 07:13:38.172486519 +0000 UTC m=+1401.869891298 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data") pod "rabbitmq-server-0" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee") : configmap "rabbitmq-config-data" not found Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.690934 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnpr7\" (UniqueName: \"kubernetes.io/projected/a00e6664-ab67-4532-9c12-89c6fa223993-kube-api-access-tnpr7\") pod \"barbican5527-account-delete-7gzrg\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.691017 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder0e8f-account-delete-xlplw"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.692222 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.722027 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lrqsm"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.722283 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-lrqsm" podUID="a2670a19-fe04-4055-905d-f9a6f8d8b0b3" containerName="openstack-network-exporter" containerID="cri-o://0c0eeb4c38181d669d383cb2ab7dff9f3df1057c5acd9ed1cce3af408d5c34a0" gracePeriod=30 Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.760216 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder0e8f-account-delete-xlplw"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.790887 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrtd\" (UniqueName: \"kubernetes.io/projected/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-kube-api-access-rsrtd\") pod \"cinder0e8f-account-delete-xlplw\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.791081 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts\") pod \"cinder0e8f-account-delete-xlplw\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.795294 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-5cm8v"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.809427 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement31f4-account-delete-9dth6"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.811221 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.834899 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement31f4-account-delete-9dth6"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.860462 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="ovsdbserver-nb" containerID="cri-o://6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39" gracePeriod=300 Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.881682 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7xfsr"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.892961 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrnd\" (UniqueName: \"kubernetes.io/projected/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-kube-api-access-knrnd\") pod \"placement31f4-account-delete-9dth6\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.893020 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrtd\" (UniqueName: \"kubernetes.io/projected/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-kube-api-access-rsrtd\") pod \"cinder0e8f-account-delete-xlplw\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.893080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts\") pod \"cinder0e8f-account-delete-xlplw\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.893099 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-operator-scripts\") pod \"placement31f4-account-delete-9dth6\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.893930 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts\") pod \"cinder0e8f-account-delete-xlplw\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.895844 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fw9rb"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.903431 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.919109 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="ovsdbserver-sb" containerID="cri-o://27822d21cd40adbf5bb91d6935a6236b638c8372268ebcb48349fef896ec2c52" gracePeriod=300 Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.944557 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrtd\" (UniqueName: \"kubernetes.io/projected/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-kube-api-access-rsrtd\") pod \"cinder0e8f-account-delete-xlplw\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.948504 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fw9rb"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.970308 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6nfnj"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.979372 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6nfnj"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.993221 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance8cc3-account-delete-rmsjp"] Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.994464 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.996055 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-operator-scripts\") pod \"placement31f4-account-delete-9dth6\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:37 crc kubenswrapper[5028]: I1123 07:13:37.996179 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knrnd\" (UniqueName: \"kubernetes.io/projected/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-kube-api-access-knrnd\") pod \"placement31f4-account-delete-9dth6\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.003544 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-operator-scripts\") pod \"placement31f4-account-delete-9dth6\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.016851 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance8cc3-account-delete-rmsjp"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.041009 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knrnd\" (UniqueName: \"kubernetes.io/projected/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-kube-api-access-knrnd\") pod \"placement31f4-account-delete-9dth6\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.055476 5028 kubelet_pods.go:1007] "Unable to retrieve pull 
secret, the image pull may not succeed." pod="openstack/swift-storage-0" secret="" err="secret \"swift-swift-dockercfg-t6mrr\" not found" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.083242 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.090925 5028 generic.go:334] "Generic (PLEG): container finished" podID="595ec560-1f5a-44f8-bf67-feee6223a090" containerID="d82fd949fe89f76a065fd1501ba17537d165f20a736688a50648e098ab446adf" exitCode=2 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.090997 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"595ec560-1f5a-44f8-bf67-feee6223a090","Type":"ContainerDied","Data":"d82fd949fe89f76a065fd1501ba17537d165f20a736688a50648e098ab446adf"} Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.096498 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lrqsm_a2670a19-fe04-4055-905d-f9a6f8d8b0b3/openstack-network-exporter/0.log" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.096543 5028 generic.go:334] "Generic (PLEG): container finished" podID="a2670a19-fe04-4055-905d-f9a6f8d8b0b3" containerID="0c0eeb4c38181d669d383cb2ab7dff9f3df1057c5acd9ed1cce3af408d5c34a0" exitCode=2 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.096603 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lrqsm" event={"ID":"a2670a19-fe04-4055-905d-f9a6f8d8b0b3","Type":"ContainerDied","Data":"0c0eeb4c38181d669d383cb2ab7dff9f3df1057c5acd9ed1cce3af408d5c34a0"} Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.097811 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr46m\" (UniqueName: \"kubernetes.io/projected/261d60cc-ee7d-463e-add8-4a4e8af392cd-kube-api-access-pr46m\") pod \"glance8cc3-account-delete-rmsjp\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.097848 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts\") pod \"glance8cc3-account-delete-rmsjp\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.099497 5028 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.099517 5028 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.099525 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.099536 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.099574 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift 
podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:38.599559389 +0000 UTC m=+1402.296964168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.145526 5028 generic.go:334] "Generic (PLEG): container finished" podID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerID="22f3b63ee8932af549b2d73e2cdf2df0d790f7927e67756a4ac6bf3630f5b56b" exitCode=2 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.145872 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05065f0-2269-4a88-abdf-45d2523ac60b","Type":"ContainerDied","Data":"22f3b63ee8932af549b2d73e2cdf2df0d790f7927e67756a4ac6bf3630f5b56b"} Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.153742 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.205540 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr46m\" (UniqueName: \"kubernetes.io/projected/261d60cc-ee7d-463e-add8-4a4e8af392cd-kube-api-access-pr46m\") pod \"glance8cc3-account-delete-rmsjp\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.205798 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts\") pod \"glance8cc3-account-delete-rmsjp\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.206027 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.206111 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data podName:8399afb1-fbd2-4ce0-b980-46b317d6cfee nodeName:}" failed. No retries permitted until 2025-11-23 07:13:39.206089493 +0000 UTC m=+1402.903494262 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data") pod "rabbitmq-server-0" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee") : configmap "rabbitmq-config-data" not found Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.206823 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts\") pod \"glance8cc3-account-delete-rmsjp\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.224569 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell08b1e-account-delete-qctpn"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.226625 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.243433 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.245092 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.256248 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr46m\" (UniqueName: \"kubernetes.io/projected/261d60cc-ee7d-463e-add8-4a4e8af392cd-kube-api-access-pr46m\") pod \"glance8cc3-account-delete-rmsjp\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.256615 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.257774 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="ovn-northd" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.256825 5028 generic.go:334] "Generic (PLEG): container finished" podID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerID="1ff78194c0b6fef4a35d4ce365c5ce7a085c94cc689d51e542e7782f680a7d0a" exitCode=2 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.257311 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1","Type":"ContainerDied","Data":"1ff78194c0b6fef4a35d4ce365c5ce7a085c94cc689d51e542e7782f680a7d0a"} Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.381091 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.390237 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi35e8-account-delete-8zqfb"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.392517 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.415113 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfqnn\" (UniqueName: \"kubernetes.io/projected/75ea620d-8ec5-47db-a758-11a1e3c9d605-kube-api-access-pfqnn\") pod \"novacell08b1e-account-delete-qctpn\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.415301 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts\") pod \"novacell08b1e-account-delete-qctpn\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.432087 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell08b1e-account-delete-qctpn"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.475826 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-skc8g"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.487847 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-swbsc"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.495139 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi35e8-account-delete-8zqfb"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.517763 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts\") pod \"novacell08b1e-account-delete-qctpn\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.517821 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts\") pod \"novaapi35e8-account-delete-8zqfb\" (UID: \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.517849 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfqnn\" (UniqueName: \"kubernetes.io/projected/75ea620d-8ec5-47db-a758-11a1e3c9d605-kube-api-access-pfqnn\") pod \"novacell08b1e-account-delete-qctpn\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.517896 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hz46\" (UniqueName: \"kubernetes.io/projected/c83094db-b0cb-4be4-a13b-de12d76e1fb0-kube-api-access-7hz46\") pod \"novaapi35e8-account-delete-8zqfb\" (UID: 
\"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.518652 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts\") pod \"novacell08b1e-account-delete-qctpn\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.519996 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-skc8g"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.534501 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-swbsc"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.544891 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-lhkql"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.545578 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfqnn\" (UniqueName: \"kubernetes.io/projected/75ea620d-8ec5-47db-a758-11a1e3c9d605-kube-api-access-pfqnn\") pod \"novacell08b1e-account-delete-qctpn\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.571759 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-lhkql"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.589417 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56d56d656c-8p7fn"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.589713 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56d56d656c-8p7fn" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-api" containerID="cri-o://940f74017dba594cbc47228face62b55ed6f8064b06190a9c015b8bd33b0e3f6" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.590101 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-56d56d656c-8p7fn" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-httpd" containerID="cri-o://5634973a7ea3c3061dced8c30254a7c5f72ea712c7865557fc5eee14be148b26" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.611073 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55bfb77665-zk585"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.611354 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55bfb77665-zk585" podUID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerName="dnsmasq-dns" containerID="cri-o://3f849cfb8e74b4980c92c33ab679fd9a8d36c82079aa4ad05015977a5e743ef6" gracePeriod=10 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.619437 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hz46\" (UniqueName: \"kubernetes.io/projected/c83094db-b0cb-4be4-a13b-de12d76e1fb0-kube-api-access-7hz46\") pod \"novaapi35e8-account-delete-8zqfb\" (UID: \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.619645 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts\") pod \"novaapi35e8-account-delete-8zqfb\" (UID: \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.619797 5028 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.619815 5028 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.619825 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.619835 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.619876 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:39.619861748 +0000 UTC m=+1403.317266527 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.620052 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.620084 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data podName:9a20ff76-1a5a-4070-b5ae-c8baf133c9d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:40.620076453 +0000 UTC m=+1404.317481232 (durationBeforeRetry 2s). 
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.621429 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts\") pod \"novaapi35e8-account-delete-8zqfb\" (UID: \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " pod="openstack/novaapi35e8-account-delete-8zqfb"
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.631063 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.664012 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vj4cx"]
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.674249 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hz46\" (UniqueName: \"kubernetes.io/projected/c83094db-b0cb-4be4-a13b-de12d76e1fb0-kube-api-access-7hz46\") pod \"novaapi35e8-account-delete-8zqfb\" (UID: \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " pod="openstack/novaapi35e8-account-delete-8zqfb"
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.674612 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vj4cx"]
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.685122 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nnh6w"]
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.694058 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nnh6w"]
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.732004 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.732270 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api-log" containerID="cri-o://60a864f23434fe7bf4df4b751d820014f9fe0d63d486f0c74863fbcfa326e877" gracePeriod=30
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.732661 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api" containerID="cri-o://0eafcd1a07324ac9778cdfa4b78db65ef912e1e1d8dddb571f38dfd760d9566d" gracePeriod=30
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.764499 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6b4c494dd6-rn255"]
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.764992 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6b4c494dd6-rn255" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-log" containerID="cri-o://f9e28eb9d85cec0a94161344fe470187e060552cb2ba5add91964b16fd771169" gracePeriod=30
Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.765376 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6b4c494dd6-rn255" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-api"
containerID="cri-o://5c6c526d79aa0e5f2a9c03d7440a3625e79fc7e6164cb907a3f55aad201ead50" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.787505 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.787815 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="cinder-scheduler" containerID="cri-o://184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.788252 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="probe" containerID="cri-o://d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.804485 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.818031 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.818287 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-log" containerID="cri-o://8074037e96098a8a2eef2221bb33919ab66ce6682e6fcf1f6adb64b678e2bbed" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.818712 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-httpd" containerID="cri-o://4217f079f64973f5810531f58f78f10d09ef89a5b6288de55515155677e95e0a" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.833785 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.834074 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-log" containerID="cri-o://8b01950aabfee244fdd553d635003949a329f35cf0bf54c41d11700015415cf0" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.834486 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-httpd" containerID="cri-o://ac4ead67e47260cf3f74b78ab8afc2ccc45af0107cfaafca7edd1336fddcee80" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.845552 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c59f98478-vbp6r"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.845853 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener-log" containerID="cri-o://d694a3f6d26807a42af11ccff6f7020afb5c194adb048d335e4b7a16a625a72f" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.846286 5028 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener" containerID="cri-o://e31414b733266ebd168e3f95a8474d2ad5c2d2b753b7b52893516e95d7e66b97" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.859121 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59c8549d57-5f4m7"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.859327 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-59c8549d57-5f4m7" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker-log" containerID="cri-o://8cdb5b500524c4ab468eb818cb3106416ad89266ed3a6334d118c5a750b1d5a5" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.859657 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-59c8549d57-5f4m7" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker" containerID="cri-o://ea5fcb780cb3db6a3d6792d1be448395cc967da3e44687458ed85dea64130699" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.872296 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d87c9f496-cstmz"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.872557 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d87c9f496-cstmz" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api-log" containerID="cri-o://c9b884444e010e2f9bac9f3e6dce5c53204fff813817cf07388304fa6d747bab" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.872701 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7d87c9f496-cstmz" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api" containerID="cri-o://f479f9aaf79c6e583bdbd977ec74f93da1e58b297f19fd2858b002f4f930227c" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.885811 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.912878 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.913139 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-log" containerID="cri-o://30582d733bd8b84f7eb9843365d1cb30de06952daa9b64528d1e925263ed1ee5" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.913512 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-api" containerID="cri-o://b98edfeee83e098dad0a822a2d765cd74a15be4763ca4b5c2fbb0a7a9fda7f9f" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.942435 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerName="rabbitmq" containerID="cri-o://a956eb2eee86d43afc41626c37352de689349467bd476d6a7ecbf7c28a1afb07" gracePeriod=604800 Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.942942 5028 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39 is running failed: container process not found" containerID="6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.943353 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.947206 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39 is running failed: container process not found" containerID="6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.967347 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39 is running failed: container process not found" containerID="6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 23 07:13:38 crc kubenswrapper[5028]: E1123 07:13:38.967403 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="ovsdbserver-nb" Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.982608 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.982923 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-log" containerID="cri-o://fb829f046617ccb1432e54fee659ef7e9fceb2ee18d795f15b09ddc7e9e8e047" gracePeriod=30 Nov 23 07:13:38 crc kubenswrapper[5028]: I1123 07:13:38.983153 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-metadata" containerID="cri-o://5dc8cea2a082f78c34b4f793fb541c20becf1183832ccdc56cdb5c470fec475a" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.055017 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-dh5vx"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.115750 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerName="rabbitmq" containerID="cri-o://54f05a9f66c1cff1703e9f91ab40df9d1d549e086aad84a05f1c7861710e604f" gracePeriod=604800 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.176192 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.176828 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.182708 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0625eec8-1472-4c19-8ebd-c2a9260a5231" path="/var/lib/kubelet/pods/0625eec8-1472-4c19-8ebd-c2a9260a5231/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.183511 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17" path="/var/lib/kubelet/pods/4aeb4bda-1f2d-4118-9cc6-ac8f77bd4f17/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.184171 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cfda6b9-44dc-4c93-9013-bf315a3bf92d" path="/var/lib/kubelet/pods/7cfda6b9-44dc-4c93-9013-bf315a3bf92d/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.185448 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8373ad19-cd11-4d27-8936-27132ab9bf72" path="/var/lib/kubelet/pods/8373ad19-cd11-4d27-8936-27132ab9bf72/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.194023 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c7363e-bafb-4e60-87bb-bb66f77d5943" path="/var/lib/kubelet/pods/b1c7363e-bafb-4e60-87bb-bb66f77d5943/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.203363 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccfd807-1efe-4af5-b0a2-45752a3774ee" path="/var/lib/kubelet/pods/bccfd807-1efe-4af5-b0a2-45752a3774ee/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.204157 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8685a3-3877-4017-b2f4-69474e17a008" path="/var/lib/kubelet/pods/df8685a3-3877-4017-b2f4-69474e17a008/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205077 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e034942e-8d84-42d0-9515-6005be425e0d" path="/var/lib/kubelet/pods/e034942e-8d84-42d0-9515-6005be425e0d/volumes" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205675 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-dh5vx"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205709 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17ad-account-create-jc5w9"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205727 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-17ad-account-create-jc5w9"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205750 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205781 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8v4x"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205794 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205813 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8v4x"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205830 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-db4rd"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205842 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.205870 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-db4rd"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.206093 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" containerName="nova-cell1-conductor-conductor" containerID="cri-o://a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.206852 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="eaa89bc7-a850-429e-a0ef-c5f2906b0d18" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.211294 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="daff2282-19f8-48c7-8d1b-780fbe97ec5a" containerName="nova-cell0-conductor-conductor" containerID="cri-o://718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.214317 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lrqsm_a2670a19-fe04-4055-905d-f9a6f8d8b0b3/openstack-network-exporter/0.log" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.214422 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.238979 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.239046 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data podName:8399afb1-fbd2-4ce0-b980-46b317d6cfee nodeName:}" failed. No retries permitted until 2025-11-23 07:13:41.239024644 +0000 UTC m=+1404.936429423 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data") pod "rabbitmq-server-0" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee") : configmap "rabbitmq-config-data" not found Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.246053 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" containerID="cri-o://acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" gracePeriod=29 Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.291150 5028 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 23 07:13:39 crc kubenswrapper[5028]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 23 07:13:39 crc kubenswrapper[5028]: + source /usr/local/bin/container-scripts/functions Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNBridge=br-int Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNRemote=tcp:localhost:6642 Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNEncapType=geneve Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNAvailabilityZones= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ EnableChassisAsGateway=true Nov 23 07:13:39 crc kubenswrapper[5028]: ++ PhysicalNetworks= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNHostName= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 23 07:13:39 crc kubenswrapper[5028]: ++ ovs_dir=/var/lib/openvswitch Nov 23 07:13:39 crc kubenswrapper[5028]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 23 07:13:39 crc kubenswrapper[5028]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 23 07:13:39 crc kubenswrapper[5028]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + sleep 0.5 Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + sleep 0.5 Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + cleanup_ovsdb_server_semaphore Nov 23 07:13:39 crc kubenswrapper[5028]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 23 07:13:39 crc kubenswrapper[5028]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 23 07:13:39 crc kubenswrapper[5028]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-5cm8v" message=< Nov 23 07:13:39 crc kubenswrapper[5028]: Exiting ovsdb-server (5) [ OK ] Nov 23 07:13:39 crc kubenswrapper[5028]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 23 07:13:39 crc kubenswrapper[5028]: + source /usr/local/bin/container-scripts/functions Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNBridge=br-int Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNRemote=tcp:localhost:6642 Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNEncapType=geneve Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNAvailabilityZones= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ EnableChassisAsGateway=true Nov 23 07:13:39 crc kubenswrapper[5028]: ++ PhysicalNetworks= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNHostName= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 23 07:13:39 crc kubenswrapper[5028]: ++ ovs_dir=/var/lib/openvswitch Nov 23 07:13:39 crc kubenswrapper[5028]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 23 07:13:39 crc kubenswrapper[5028]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 23 07:13:39 crc kubenswrapper[5028]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + sleep 0.5 Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + sleep 0.5 Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + cleanup_ovsdb_server_semaphore Nov 23 07:13:39 crc kubenswrapper[5028]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 23 07:13:39 crc kubenswrapper[5028]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 23 07:13:39 crc kubenswrapper[5028]: > Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.291535 5028 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 23 07:13:39 crc kubenswrapper[5028]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 23 07:13:39 crc kubenswrapper[5028]: + source /usr/local/bin/container-scripts/functions Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNBridge=br-int Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNRemote=tcp:localhost:6642 Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNEncapType=geneve Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNAvailabilityZones= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ EnableChassisAsGateway=true Nov 23 07:13:39 crc kubenswrapper[5028]: ++ PhysicalNetworks= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ OVNHostName= Nov 23 07:13:39 crc kubenswrapper[5028]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 23 07:13:39 crc kubenswrapper[5028]: ++ ovs_dir=/var/lib/openvswitch Nov 23 07:13:39 crc kubenswrapper[5028]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 23 07:13:39 crc kubenswrapper[5028]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 23 07:13:39 crc kubenswrapper[5028]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + sleep 0.5 Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 23 07:13:39 crc kubenswrapper[5028]: + sleep 0.5 Nov 23 07:13:39 crc kubenswrapper[5028]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Nov 23 07:13:39 crc kubenswrapper[5028]: + cleanup_ovsdb_server_semaphore
Nov 23 07:13:39 crc kubenswrapper[5028]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Nov 23 07:13:39 crc kubenswrapper[5028]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Nov 23 07:13:39 crc kubenswrapper[5028]: > pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" containerID="cri-o://87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa"
Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.291575 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" containerID="cri-o://87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" gracePeriod=29
Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.319076 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.324863 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.333314 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.333395 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" containerName="nova-cell1-conductor-conductor"
Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.341327 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxvs8\" (UniqueName: \"kubernetes.io/projected/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-kube-api-access-sxvs8\") pod \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") "
Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.341373 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovs-rundir\") pod \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") "
Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.341405 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovn-rundir\") pod \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") "
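
The block above is a PreStop hook dying mid-wait: stop-ovsdb-server.sh polls for the /var/lib/openvswitch/is_safe_to_stop_ovsdb_server semaphore in 0.5 s steps, the runtime eventually reaps it, and the hook is reported as "exited with 137" (137 = 128 + 9, the shell convention for death by SIGKILL); the kubelet then falls through to killing ovsdb-server directly with the remaining gracePeriod=29. The "cannot register an exec PID: container is stopping" errors that follow are exec readiness probes racing the same shutdown, not an independent fault. A sketch of where such a hook is declared, assuming kubectl access; pod and container names come from the log:

    # The PreStop hook is declared on the ovsdb-server container spec.
    kubectl -n openstack get pod ovn-controller-ovs-5cm8v \
      -o jsonpath='{.spec.containers[?(@.name=="ovsdb-server")].lifecycle.preStop.exec.command}'

    # 128 + signal number is how shells encode deaths by signal:
    echo $((128 + 9))   # prints 137 (SIGKILL)
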
\"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.341465 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-metrics-certs-tls-certs\") pod \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.341524 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-combined-ca-bundle\") pod \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.341547 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-config\") pod \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\" (UID: \"a2670a19-fe04-4055-905d-f9a6f8d8b0b3\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.343791 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-config" (OuterVolumeSpecName: "config") pod "a2670a19-fe04-4055-905d-f9a6f8d8b0b3" (UID: "a2670a19-fe04-4055-905d-f9a6f8d8b0b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.352345 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "a2670a19-fe04-4055-905d-f9a6f8d8b0b3" (UID: "a2670a19-fe04-4055-905d-f9a6f8d8b0b3"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.352545 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "a2670a19-fe04-4055-905d-f9a6f8d8b0b3" (UID: "a2670a19-fe04-4055-905d-f9a6f8d8b0b3"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.352987 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-kube-api-access-sxvs8" (OuterVolumeSpecName: "kube-api-access-sxvs8") pod "a2670a19-fe04-4055-905d-f9a6f8d8b0b3" (UID: "a2670a19-fe04-4055-905d-f9a6f8d8b0b3"). InnerVolumeSpecName "kube-api-access-sxvs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.354126 5028 generic.go:334] "Generic (PLEG): container finished" podID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerID="f9e28eb9d85cec0a94161344fe470187e060552cb2ba5add91964b16fd771169" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.354365 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4c494dd6-rn255" event={"ID":"2a3fd963-7f73-4069-a993-1dfec6751c57","Type":"ContainerDied","Data":"f9e28eb9d85cec0a94161344fe470187e060552cb2ba5add91964b16fd771169"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.392630 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b05065f0-2269-4a88-abdf-45d2523ac60b/ovsdbserver-nb/0.log" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.392683 5028 generic.go:334] "Generic (PLEG): container finished" podID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerID="6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.393971 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05065f0-2269-4a88-abdf-45d2523ac60b","Type":"ContainerDied","Data":"6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.407409 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_595ec560-1f5a-44f8-bf67-feee6223a090/ovsdbserver-sb/0.log" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.407466 5028 generic.go:334] "Generic (PLEG): container finished" podID="595ec560-1f5a-44f8-bf67-feee6223a090" containerID="27822d21cd40adbf5bb91d6935a6236b638c8372268ebcb48349fef896ec2c52" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.407530 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"595ec560-1f5a-44f8-bf67-feee6223a090","Type":"ContainerDied","Data":"27822d21cd40adbf5bb91d6935a6236b638c8372268ebcb48349fef896ec2c52"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.409155 5028 generic.go:334] "Generic (PLEG): container finished" podID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerID="fb829f046617ccb1432e54fee659ef7e9fceb2ee18d795f15b09ddc7e9e8e047" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.409207 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab","Type":"ContainerDied","Data":"fb829f046617ccb1432e54fee659ef7e9fceb2ee18d795f15b09ddc7e9e8e047"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.413066 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2670a19-fe04-4055-905d-f9a6f8d8b0b3" (UID: "a2670a19-fe04-4055-905d-f9a6f8d8b0b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.415500 5028 generic.go:334] "Generic (PLEG): container finished" podID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerID="8074037e96098a8a2eef2221bb33919ab66ce6682e6fcf1f6adb64b678e2bbed" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.415580 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d35d26e-c3f7-4597-80c6-60358f2d2c21","Type":"ContainerDied","Data":"8074037e96098a8a2eef2221bb33919ab66ce6682e6fcf1f6adb64b678e2bbed"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.429420 5028 generic.go:334] "Generic (PLEG): container finished" podID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerID="c9b884444e010e2f9bac9f3e6dce5c53204fff813817cf07388304fa6d747bab" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.429484 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d87c9f496-cstmz" event={"ID":"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c","Type":"ContainerDied","Data":"c9b884444e010e2f9bac9f3e6dce5c53204fff813817cf07388304fa6d747bab"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.432684 5028 generic.go:334] "Generic (PLEG): container finished" podID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerID="30582d733bd8b84f7eb9843365d1cb30de06952daa9b64528d1e925263ed1ee5" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.432745 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a92efa4-12b6-431d-8aa7-9baa545f7e07","Type":"ContainerDied","Data":"30582d733bd8b84f7eb9843365d1cb30de06952daa9b64528d1e925263ed1ee5"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.456516 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxvs8\" (UniqueName: \"kubernetes.io/projected/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-kube-api-access-sxvs8\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.456542 5028 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.456551 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.456561 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.456571 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.459152 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerName="galera" containerID="cri-o://def0d1a5d1d2c89fcd962a46215131a3c686adec6de8fc5a5ece7cb87528bfac" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.461288 5028 generic.go:334] "Generic (PLEG): container finished" 
podID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerID="d694a3f6d26807a42af11ccff6f7020afb5c194adb048d335e4b7a16a625a72f" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.461510 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" event={"ID":"534edd12-5e24-4d56-9f1a-944e1ed4a65b","Type":"ContainerDied","Data":"d694a3f6d26807a42af11ccff6f7020afb5c194adb048d335e4b7a16a625a72f"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.464331 5028 generic.go:334] "Generic (PLEG): container finished" podID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerID="3f849cfb8e74b4980c92c33ab679fd9a8d36c82079aa4ad05015977a5e743ef6" exitCode=0 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.464931 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55bfb77665-zk585" event={"ID":"02a942e1-e2f6-45ea-829d-70d45cca4860","Type":"ContainerDied","Data":"3f849cfb8e74b4980c92c33ab679fd9a8d36c82079aa4ad05015977a5e743ef6"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.499338 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.499526 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c60c122c-3364-4717-bc34-5a610c1a1ac8" containerName="nova-scheduler-scheduler" containerID="cri-o://d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.506315 5028 generic.go:334] "Generic (PLEG): container finished" podID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerID="8b01950aabfee244fdd553d635003949a329f35cf0bf54c41d11700015415cf0" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.506394 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9cf78ba6-9116-42cf-8be5-809dd912646c","Type":"ContainerDied","Data":"8b01950aabfee244fdd553d635003949a329f35cf0bf54c41d11700015415cf0"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.539102 5028 generic.go:334] "Generic (PLEG): container finished" podID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerID="5634973a7ea3c3061dced8c30254a7c5f72ea712c7865557fc5eee14be148b26" exitCode=0 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.539477 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d56d656c-8p7fn" event={"ID":"f7d02425-cddb-44e9-983a-456b2dc4d6fe","Type":"ContainerDied","Data":"5634973a7ea3c3061dced8c30254a7c5f72ea712c7865557fc5eee14be148b26"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.576113 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="eaa89bc7-a850-429e-a0ef-c5f2906b0d18" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.195:6080/vnc_lite.html\": dial tcp 10.217.0.195:6080: connect: connection refused" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.576893 5028 generic.go:334] "Generic (PLEG): container finished" podID="023257e8-ab54-4423-94bc-1f8d547afa69" containerID="60a864f23434fe7bf4df4b751d820014f9fe0d63d486f0c74863fbcfa326e877" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.576989 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"023257e8-ab54-4423-94bc-1f8d547afa69","Type":"ContainerDied","Data":"60a864f23434fe7bf4df4b751d820014f9fe0d63d486f0c74863fbcfa326e877"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.587906 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a2670a19-fe04-4055-905d-f9a6f8d8b0b3" (UID: "a2670a19-fe04-4055-905d-f9a6f8d8b0b3"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.606565 5028 generic.go:334] "Generic (PLEG): container finished" podID="e2100d9d-d4e3-40aa-8082-e6536e2ed096" containerID="378f53f6920941038405cc29b0d2089fe43431d937244b33f18351586a7ec8e8" exitCode=137 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.609795 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_595ec560-1f5a-44f8-bf67-feee6223a090/ovsdbserver-sb/0.log" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.609880 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.621658 5028 generic.go:334] "Generic (PLEG): container finished" podID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerID="8cdb5b500524c4ab468eb818cb3106416ad89266ed3a6334d118c5a750b1d5a5" exitCode=143 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.621774 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c8549d57-5f4m7" event={"ID":"83c6472a-6c69-45ff-b0c2-3bf66e0523f2","Type":"ContainerDied","Data":"8cdb5b500524c4ab468eb818cb3106416ad89266ed3a6334d118c5a750b1d5a5"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.634765 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lrqsm_a2670a19-fe04-4055-905d-f9a6f8d8b0b3/openstack-network-exporter/0.log" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.634973 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lrqsm" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635005 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lrqsm" event={"ID":"a2670a19-fe04-4055-905d-f9a6f8d8b0b3","Type":"ContainerDied","Data":"4713cde39842a8fcea9276d63cc598206b60c53ed8dbf0457ffc7607bb3aa358"} Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635082 5028 scope.go:117] "RemoveContainer" containerID="0c0eeb4c38181d669d383cb2ab7dff9f3df1057c5acd9ed1cce3af408d5c34a0" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635254 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-server" containerID="cri-o://284fcc70f4d39f784940ff357d718a373de8c2e8881a64f54fa7a0acceaadf32" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635273 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-replicator" containerID="cri-o://3758a8edb86f9cc2311dfbbc6420a20b7bb4456290271a1c277bd6e7daaf2d0b" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635344 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-server" containerID="cri-o://9b2449d180857fa287537e9f2caa3d5b2c6ef6945336b39d4b2e2f1bba6f48e5" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635382 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-updater" containerID="cri-o://f7f3db02234e290a3cc4660fddd3b6ddc3c347130630535c71abc6cc72896ac8" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635412 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-auditor" containerID="cri-o://34d6b82184b9d4d53e4cb202e9b47148b5aa74237f7bd04d23d2f1b5f8f45fee" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635447 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-replicator" containerID="cri-o://93de291167c5b5543ba1d794eabbe63b56604ecdcef6943568578c5bb4a29229" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635481 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-server" containerID="cri-o://f33d8257c3344d1c41e045d276b50855aed400ece4dc05b5dbad0b7e7e645ec1" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635526 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-reaper" containerID="cri-o://51780654d47b4748040c6f8ea75ab63207224c0aa1e348e73e20d5e202474d89" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635554 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-auditor" containerID="cri-o://75bf1a6baebd3a81179a9db585726101ffd93df8b27cb4e5da1fb372c8b6ce89" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635588 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-replicator" containerID="cri-o://2cff70916c4b86894d8212d9f22c1034cf16138344b62b7e140d3c041d91cee7" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635734 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-expirer" containerID="cri-o://2df62fa2c1e66a404dc3b2733961ec36cec00bf7f77ead77869b439ade6e92b1" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635776 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="swift-recon-cron" containerID="cri-o://1aafd0f3d20a9763f9c844dde7d914b68a8c9b6c1813f1ef8f09835c63225eb8" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.635806 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="rsync" containerID="cri-o://40974909c6c254ecbda3968c5a7f724e9828b2c9c3d631d0c9e90be6edb3e66d" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.636068 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-updater" containerID="cri-o://81a687eb71e007a1ac13feb65bbc68b1c3f2bf021519d85e91cd16f4d603b2f9" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.636174 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-auditor" containerID="cri-o://3457d5419e6677ab415baff0a0c4f5bcfce5e9c08759e54ca60a97c9fe0f0b09" gracePeriod=30 Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.669067 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2670a19-fe04-4055-905d-f9a6f8d8b0b3-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.669380 5028 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.669448 5028 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.669461 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.669474 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:39 crc kubenswrapper[5028]: E1123 07:13:39.671453 5028 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:41.671426314 +0000 UTC m=+1405.368831093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.762165 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b05065f0-2269-4a88-abdf-45d2523ac60b/ovsdbserver-nb/0.log" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.762274 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.771778 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-metrics-certs-tls-certs\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.771829 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdbserver-sb-tls-certs\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.771922 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.771971 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-config\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.772028 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-combined-ca-bundle\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.772093 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wxft\" (UniqueName: \"kubernetes.io/projected/595ec560-1f5a-44f8-bf67-feee6223a090-kube-api-access-6wxft\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.772133 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdb-rundir\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.772158 5028 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-scripts\") pod \"595ec560-1f5a-44f8-bf67-feee6223a090\" (UID: \"595ec560-1f5a-44f8-bf67-feee6223a090\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.774518 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-config" (OuterVolumeSpecName: "config") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.776211 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.781118 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-scripts" (OuterVolumeSpecName: "scripts") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.783514 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595ec560-1f5a-44f8-bf67-feee6223a090-kube-api-access-6wxft" (OuterVolumeSpecName: "kube-api-access-6wxft") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "kube-api-access-6wxft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.794189 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.875789 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27hf4\" (UniqueName: \"kubernetes.io/projected/b05065f0-2269-4a88-abdf-45d2523ac60b-kube-api-access-27hf4\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876252 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-scripts\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876280 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdbserver-nb-tls-certs\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876298 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-metrics-certs-tls-certs\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876405 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-combined-ca-bundle\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876440 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-config\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876463 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdb-rundir\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876495 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"b05065f0-2269-4a88-abdf-45d2523ac60b\" (UID: \"b05065f0-2269-4a88-abdf-45d2523ac60b\") " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876895 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876907 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876917 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wxft\" (UniqueName: 
\"kubernetes.io/projected/595ec560-1f5a-44f8-bf67-feee6223a090-kube-api-access-6wxft\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876927 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.876935 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595ec560-1f5a-44f8-bf67-feee6223a090-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.880416 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.880431 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-config" (OuterVolumeSpecName: "config") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.881041 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-scripts" (OuterVolumeSpecName: "scripts") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.882684 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05065f0-2269-4a88-abdf-45d2523ac60b-kube-api-access-27hf4" (OuterVolumeSpecName: "kube-api-access-27hf4") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "kube-api-access-27hf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:39 crc kubenswrapper[5028]: I1123 07:13:39.906968 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.932927 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.940373 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.962118 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:39.980830 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983081 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983109 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983122 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05065f0-2269-4a88-abdf-45d2523ac60b-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983132 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983170 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983179 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983189 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:39.983198 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27hf4\" (UniqueName: \"kubernetes.io/projected/b05065f0-2269-4a88-abdf-45d2523ac60b-kube-api-access-27hf4\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:39.983746 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:39.984926 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:39.985016 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c60c122c-3364-4717-bc34-5a610c1a1ac8" containerName="nova-scheduler-scheduler" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.036795 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.058164 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.071237 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "595ec560-1f5a-44f8-bf67-feee6223a090" (UID: "595ec560-1f5a-44f8-bf67-feee6223a090"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.071396 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.085094 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.085118 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.085127 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.085136 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/595ec560-1f5a-44f8-bf67-feee6223a090-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.215111 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b05065f0-2269-4a88-abdf-45d2523ac60b" (UID: "b05065f0-2269-4a88-abdf-45d2523ac60b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.228523 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-77b69c59d9-28nfd"] Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.228756 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-77b69c59d9-28nfd" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-httpd" containerID="cri-o://c2313efa3dc196f747bb767207978f9b4f70c79524bd34c535de3bdf4ae01e56" gracePeriod=30 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.230432 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-77b69c59d9-28nfd" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-server" containerID="cri-o://2a139570e27e4d3f8e409cd6102e980a7101788faa70c851a25210cd2a440e17" gracePeriod=30 Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.276590 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00067261_cd23_4c2f_8be4_24b01eaac580.slice/crio-conmon-40974909c6c254ecbda3968c5a7f724e9828b2c9c3d631d0c9e90be6edb3e66d.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.290985 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05065f0-2269-4a88-abdf-45d2523ac60b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.355476 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" 
containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.356604 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.356899 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.357052 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.364430 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.367589 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.374979 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.375057 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.636081 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron8471-account-delete-6lm46"] Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.683088 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55bfb77665-zk585" 
event={"ID":"02a942e1-e2f6-45ea-829d-70d45cca4860","Type":"ContainerDied","Data":"1753c746513cb1c59737ba7752ab926efd75e39020faec8204523e61848a46e7"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.683428 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1753c746513cb1c59737ba7752ab926efd75e39020faec8204523e61848a46e7" Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.701606 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.701674 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data podName:9a20ff76-1a5a-4070-b5ae-c8baf133c9d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:44.701645477 +0000 UTC m=+1408.399050256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7") : configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.708834 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.712402 5028 generic.go:334] "Generic (PLEG): container finished" podID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerID="def0d1a5d1d2c89fcd962a46215131a3c686adec6de8fc5a5ece7cb87528bfac" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.712472 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3","Type":"ContainerDied","Data":"def0d1a5d1d2c89fcd962a46215131a3c686adec6de8fc5a5ece7cb87528bfac"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.715643 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.720182 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_595ec560-1f5a-44f8-bf67-feee6223a090/ovsdbserver-sb/0.log" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.720256 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"595ec560-1f5a-44f8-bf67-feee6223a090","Type":"ContainerDied","Data":"b8736f8c8d7aa53e8f93d55683a720e38fd59d08030678c5d9bfaaefe59bdb9f"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.720290 5028 scope.go:117] "RemoveContainer" containerID="d82fd949fe89f76a065fd1501ba17537d165f20a736688a50648e098ab446adf" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.720376 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.728264 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.732362 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-lrqsm"] Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.738823 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-lrqsm"] Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.739743 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.742333 5028 generic.go:334] "Generic (PLEG): container finished" podID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerID="d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.742367 5028 generic.go:334] "Generic (PLEG): container finished" podID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerID="184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.742447 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4","Type":"ContainerDied","Data":"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.742471 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4","Type":"ContainerDied","Data":"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.742483 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4","Type":"ContainerDied","Data":"ef802c5878cc69c20eef6443ba71479d3d7090b1070300612dc4ff60af1999ef"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.750707 5028 generic.go:334] "Generic (PLEG): container finished" podID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerID="c2313efa3dc196f747bb767207978f9b4f70c79524bd34c535de3bdf4ae01e56" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.750792 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b69c59d9-28nfd" event={"ID":"230d8024-5d83-4742-9bf9-77bc956dd4a9","Type":"ContainerDied","Data":"c2313efa3dc196f747bb767207978f9b4f70c79524bd34c535de3bdf4ae01e56"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.763287 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.769718 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b05065f0-2269-4a88-abdf-45d2523ac60b/ovsdbserver-nb/0.log" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.770139 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.770345 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b05065f0-2269-4a88-abdf-45d2523ac60b","Type":"ContainerDied","Data":"eaab37fc9a9408fa10e98dcae012556c1b0b13d8daca6339c5dc74f925aef15e"}
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.794545 5028 scope.go:117] "RemoveContainer" containerID="27822d21cd40adbf5bb91d6935a6236b638c8372268ebcb48349fef896ec2c52"
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802432 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwg29\" (UniqueName: \"kubernetes.io/projected/02a942e1-e2f6-45ea-829d-70d45cca4860-kube-api-access-hwg29\") pod \"02a942e1-e2f6-45ea-829d-70d45cca4860\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802525 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-svc\") pod \"02a942e1-e2f6-45ea-829d-70d45cca4860\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802550 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbvzh\" (UniqueName: \"kubernetes.io/projected/e2100d9d-d4e3-40aa-8082-e6536e2ed096-kube-api-access-kbvzh\") pod \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802576 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-scripts\") pod \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802619 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-nb\") pod \"02a942e1-e2f6-45ea-829d-70d45cca4860\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802639 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-config-data\") pod \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802666 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data\") pod \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802687 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-vencrypt-tls-certs\") pod \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802703 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom\") pod \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802746 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config-secret\") pod \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802762 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-etc-machine-id\") pod \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802784 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-swift-storage-0\") pod \"02a942e1-e2f6-45ea-829d-70d45cca4860\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802813 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config\") pod \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802876 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg2hj\" (UniqueName: \"kubernetes.io/projected/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-kube-api-access-wg2hj\") pod \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802909 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-nova-novncproxy-tls-certs\") pod \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802927 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-combined-ca-bundle\") pod \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802961 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-sb\") pod \"02a942e1-e2f6-45ea-829d-70d45cca4860\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.802998 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-config\") pod \"02a942e1-e2f6-45ea-829d-70d45cca4860\" (UID: \"02a942e1-e2f6-45ea-829d-70d45cca4860\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.803020 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-combined-ca-bundle\") pod \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\" (UID: \"e2100d9d-d4e3-40aa-8082-e6536e2ed096\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.803041 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2wn4\" (UniqueName: \"kubernetes.io/projected/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-kube-api-access-s2wn4\") pod \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\" (UID: \"794e1c4d-3639-4b06-9a8b-5597fe8fa4c4\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.803068 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-combined-ca-bundle\") pod \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\" (UID: \"eaa89bc7-a850-429e-a0ef-c5f2906b0d18\") "
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.804568 5028 generic.go:334] "Generic (PLEG): container finished" podID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" exitCode=0
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.804708 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cm8v" event={"ID":"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f","Type":"ContainerDied","Data":"87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa"}
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.808220 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" (UID: "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.817564 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a942e1-e2f6-45ea-829d-70d45cca4860-kube-api-access-hwg29" (OuterVolumeSpecName: "kube-api-access-hwg29") pod "02a942e1-e2f6-45ea-829d-70d45cca4860" (UID: "02a942e1-e2f6-45ea-829d-70d45cca4860"). InnerVolumeSpecName "kube-api-access-hwg29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.820615 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.833257 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-kube-api-access-wg2hj" (OuterVolumeSpecName: "kube-api-access-wg2hj") pod "eaa89bc7-a850-429e-a0ef-c5f2906b0d18" (UID: "eaa89bc7-a850-429e-a0ef-c5f2906b0d18"). InnerVolumeSpecName "kube-api-access-wg2hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.833574 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.837037 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2100d9d-d4e3-40aa-8082-e6536e2ed096-kube-api-access-kbvzh" (OuterVolumeSpecName: "kube-api-access-kbvzh") pod "e2100d9d-d4e3-40aa-8082-e6536e2ed096" (UID: "e2100d9d-d4e3-40aa-8082-e6536e2ed096"). InnerVolumeSpecName "kube-api-access-kbvzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
InnerVolumeSpecName "kube-api-access-kbvzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.841978 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.848152 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-kube-api-access-s2wn4" (OuterVolumeSpecName: "kube-api-access-s2wn4") pod "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" (UID: "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4"). InnerVolumeSpecName "kube-api-access-s2wn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857572 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="40974909c6c254ecbda3968c5a7f724e9828b2c9c3d631d0c9e90be6edb3e66d" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857617 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="2df62fa2c1e66a404dc3b2733961ec36cec00bf7f77ead77869b439ade6e92b1" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857629 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="81a687eb71e007a1ac13feb65bbc68b1c3f2bf021519d85e91cd16f4d603b2f9" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857636 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="3457d5419e6677ab415baff0a0c4f5bcfce5e9c08759e54ca60a97c9fe0f0b09" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857643 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="3758a8edb86f9cc2311dfbbc6420a20b7bb4456290271a1c277bd6e7daaf2d0b" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857649 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="9b2449d180857fa287537e9f2caa3d5b2c6ef6945336b39d4b2e2f1bba6f48e5" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857655 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="f7f3db02234e290a3cc4660fddd3b6ddc3c347130630535c71abc6cc72896ac8" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857661 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="34d6b82184b9d4d53e4cb202e9b47148b5aa74237f7bd04d23d2f1b5f8f45fee" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857667 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="93de291167c5b5543ba1d794eabbe63b56604ecdcef6943568578c5bb4a29229" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857673 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="f33d8257c3344d1c41e045d276b50855aed400ece4dc05b5dbad0b7e7e645ec1" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857696 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="51780654d47b4748040c6f8ea75ab63207224c0aa1e348e73e20d5e202474d89" exitCode=0 Nov 23 07:13:40 crc 
kubenswrapper[5028]: I1123 07:13:40.857702 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="75bf1a6baebd3a81179a9db585726101ffd93df8b27cb4e5da1fb372c8b6ce89" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857707 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="2cff70916c4b86894d8212d9f22c1034cf16138344b62b7e140d3c041d91cee7" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857715 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="284fcc70f4d39f784940ff357d718a373de8c2e8881a64f54fa7a0acceaadf32" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857777 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"40974909c6c254ecbda3968c5a7f724e9828b2c9c3d631d0c9e90be6edb3e66d"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857802 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"2df62fa2c1e66a404dc3b2733961ec36cec00bf7f77ead77869b439ade6e92b1"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857813 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"81a687eb71e007a1ac13feb65bbc68b1c3f2bf021519d85e91cd16f4d603b2f9"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857841 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"3457d5419e6677ab415baff0a0c4f5bcfce5e9c08759e54ca60a97c9fe0f0b09"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857852 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"3758a8edb86f9cc2311dfbbc6420a20b7bb4456290271a1c277bd6e7daaf2d0b"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857861 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"9b2449d180857fa287537e9f2caa3d5b2c6ef6945336b39d4b2e2f1bba6f48e5"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857869 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"f7f3db02234e290a3cc4660fddd3b6ddc3c347130630535c71abc6cc72896ac8"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857877 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"34d6b82184b9d4d53e4cb202e9b47148b5aa74237f7bd04d23d2f1b5f8f45fee"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857885 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"93de291167c5b5543ba1d794eabbe63b56604ecdcef6943568578c5bb4a29229"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857893 5028 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"f33d8257c3344d1c41e045d276b50855aed400ece4dc05b5dbad0b7e7e645ec1"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857920 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"51780654d47b4748040c6f8ea75ab63207224c0aa1e348e73e20d5e202474d89"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857930 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"75bf1a6baebd3a81179a9db585726101ffd93df8b27cb4e5da1fb372c8b6ce89"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.857938 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"2cff70916c4b86894d8212d9f22c1034cf16138344b62b7e140d3c041d91cee7"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.858002 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"284fcc70f4d39f784940ff357d718a373de8c2e8881a64f54fa7a0acceaadf32"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.858933 5028 scope.go:117] "RemoveContainer" containerID="d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.861853 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-scripts" (OuterVolumeSpecName: "scripts") pod "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" (UID: "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.869898 5028 generic.go:334] "Generic (PLEG): container finished" podID="eaa89bc7-a850-429e-a0ef-c5f2906b0d18" containerID="26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a" exitCode=0 Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.870027 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eaa89bc7-a850-429e-a0ef-c5f2906b0d18","Type":"ContainerDied","Data":"26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.870062 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eaa89bc7-a850-429e-a0ef-c5f2906b0d18","Type":"ContainerDied","Data":"a84b8052b0b2fd22ad2f213f0fec236e8ff2e66a7d39efb2561c980961b98dc0"} Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.870130 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.870812 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.873032 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" (UID: "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.887178 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e2100d9d-d4e3-40aa-8082-e6536e2ed096" (UID: "e2100d9d-d4e3-40aa-8082-e6536e2ed096"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.887644 5028 scope.go:117] "RemoveContainer" containerID="184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.898163 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2100d9d-d4e3-40aa-8082-e6536e2ed096" (UID: "e2100d9d-d4e3-40aa-8082-e6536e2ed096"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.905822 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg2hj\" (UniqueName: \"kubernetes.io/projected/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-kube-api-access-wg2hj\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906114 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906203 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2wn4\" (UniqueName: \"kubernetes.io/projected/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-kube-api-access-s2wn4\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906271 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwg29\" (UniqueName: \"kubernetes.io/projected/02a942e1-e2f6-45ea-829d-70d45cca4860-kube-api-access-hwg29\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906337 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbvzh\" (UniqueName: \"kubernetes.io/projected/e2100d9d-d4e3-40aa-8082-e6536e2ed096-kube-api-access-kbvzh\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906437 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906530 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906643 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.906718 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.923249 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-config-data" (OuterVolumeSpecName: "config-data") pod "eaa89bc7-a850-429e-a0ef-c5f2906b0d18" (UID: "eaa89bc7-a850-429e-a0ef-c5f2906b0d18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.938769 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaa89bc7-a850-429e-a0ef-c5f2906b0d18" (UID: "eaa89bc7-a850-429e-a0ef-c5f2906b0d18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.953793 5028 scope.go:117] "RemoveContainer" containerID="d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df" Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.956316 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df\": container with ID starting with d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df not found: ID does not exist" containerID="d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.956341 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df"} err="failed to get container status \"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df\": rpc error: code = NotFound desc = could not find container \"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df\": container with ID starting with d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df not found: ID does not exist" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.956360 5028 scope.go:117] "RemoveContainer" containerID="184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d" Nov 23 07:13:40 crc kubenswrapper[5028]: E1123 07:13:40.957002 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d\": container with ID starting with 184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d not found: ID does not exist" containerID="184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.957042 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d"} err="failed to get container status \"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d\": rpc error: code = NotFound desc = could not find container \"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d\": container with ID starting with 184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d not found: ID does not exist" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.957069 5028 scope.go:117] 
"RemoveContainer" containerID="d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.957356 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df"} err="failed to get container status \"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df\": rpc error: code = NotFound desc = could not find container \"d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df\": container with ID starting with d16e016cff661a8f58581a1d809cb908a465c3db9988626a12171e2538a142df not found: ID does not exist" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.957377 5028 scope.go:117] "RemoveContainer" containerID="184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.958331 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d"} err="failed to get container status \"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d\": rpc error: code = NotFound desc = could not find container \"184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d\": container with ID starting with 184ab2c820705ccb42a13f66587a9408e42a8dbd875848023cc0f8529c36c50d not found: ID does not exist" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.958358 5028 scope.go:117] "RemoveContainer" containerID="378f53f6920941038405cc29b0d2089fe43431d937244b33f18351586a7ec8e8" Nov 23 07:13:40 crc kubenswrapper[5028]: I1123 07:13:40.997080 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "eaa89bc7-a850-429e-a0ef-c5f2906b0d18" (UID: "eaa89bc7-a850-429e-a0ef-c5f2906b0d18"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.003307 5028 scope.go:117] "RemoveContainer" containerID="22f3b63ee8932af549b2d73e2cdf2df0d790f7927e67756a4ac6bf3630f5b56b" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.008164 5028 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.008190 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.008199 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.012498 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e2100d9d-d4e3-40aa-8082-e6536e2ed096" (UID: "e2100d9d-d4e3-40aa-8082-e6536e2ed096"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.034773 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "02a942e1-e2f6-45ea-829d-70d45cca4860" (UID: "02a942e1-e2f6-45ea-829d-70d45cca4860"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.049107 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-config" (OuterVolumeSpecName: "config") pod "02a942e1-e2f6-45ea-829d-70d45cca4860" (UID: "02a942e1-e2f6-45ea-829d-70d45cca4860"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.065317 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "02a942e1-e2f6-45ea-829d-70d45cca4860" (UID: "02a942e1-e2f6-45ea-829d-70d45cca4860"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.067051 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373db2ec-bd55-424f-bf32-41e7107d8102" path="/var/lib/kubelet/pods/373db2ec-bd55-424f-bf32-41e7107d8102/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.068053 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" path="/var/lib/kubelet/pods/595ec560-1f5a-44f8-bf67-feee6223a090/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.068835 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e0a689-cd74-4797-9d4f-8647ec86df48" path="/var/lib/kubelet/pods/78e0a689-cd74-4797-9d4f-8647ec86df48/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.069961 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2670a19-fe04-4055-905d-f9a6f8d8b0b3" path="/var/lib/kubelet/pods/a2670a19-fe04-4055-905d-f9a6f8d8b0b3/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.070878 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" path="/var/lib/kubelet/pods/b05065f0-2269-4a88-abdf-45d2523ac60b/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.071586 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2100d9d-d4e3-40aa-8082-e6536e2ed096" path="/var/lib/kubelet/pods/e2100d9d-d4e3-40aa-8082-e6536e2ed096/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.072827 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f844edf7-0721-4de7-a55d-615ede8fa93a" path="/var/lib/kubelet/pods/f844edf7-0721-4de7-a55d-615ede8fa93a/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.073423 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff3de4d5-2fff-47f3-b769-5f1db4973efd" path="/var/lib/kubelet/pods/ff3de4d5-2fff-47f3-b769-5f1db4973efd/volumes" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.080235 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" (UID: "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.086325 5028 scope.go:117] "RemoveContainer" containerID="6b9485351f5f89d9f30b3b6cc76299c0d67e1503150abd9b28c76a0524e4ef39" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.118911 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "02a942e1-e2f6-45ea-829d-70d45cca4860" (UID: "02a942e1-e2f6-45ea-829d-70d45cca4860"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.121415 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.121444 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.121461 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2100d9d-d4e3-40aa-8082-e6536e2ed096-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.121470 5028 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.121479 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.121490 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.138939 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.144652 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "02a942e1-e2f6-45ea-829d-70d45cca4860" (UID: "02a942e1-e2f6-45ea-829d-70d45cca4860"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.166228 5028 scope.go:117] "RemoveContainer" containerID="26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.166592 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "eaa89bc7-a850-429e-a0ef-c5f2906b0d18" (UID: "eaa89bc7-a850-429e-a0ef-c5f2906b0d18"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.177239 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder0e8f-account-delete-xlplw"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.183940 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data" (OuterVolumeSpecName: "config-data") pod "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" (UID: "794e1c4d-3639-4b06-9a8b-5597fe8fa4c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.203510 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement31f4-account-delete-9dth6"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222082 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222140 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-combined-ca-bundle\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222217 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-default\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222292 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-generated\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222333 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q78l2\" (UniqueName: \"kubernetes.io/projected/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kube-api-access-q78l2\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222480 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kolla-config\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: 
\"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222513 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-operator-scripts\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.222542 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-galera-tls-certs\") pod \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\" (UID: \"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3\") " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.223106 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/02a942e1-e2f6-45ea-829d-70d45cca4860-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.223126 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.223137 5028 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa89bc7-a850-429e-a0ef-c5f2906b0d18-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.223791 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.224364 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.224663 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.227122 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kube-api-access-q78l2" (OuterVolumeSpecName: "kube-api-access-q78l2") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "kube-api-access-q78l2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.228836 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.231049 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.233118 5028 scope.go:117] "RemoveContainer" containerID="26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a" Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.233972 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a\": container with ID starting with 26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a not found: ID does not exist" containerID="26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.234008 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a"} err="failed to get container status \"26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a\": rpc error: code = NotFound desc = could not find container \"26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a\": container with ID starting with 26a46c7f6d35d8a09100f443f9676be8de4cf283d57d85a2f2fd5807b97a551a not found: ID does not exist" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.235063 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.238843 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.252394 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.293964 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" (UID: "de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324440 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324464 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q78l2\" (UniqueName: \"kubernetes.io/projected/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kube-api-access-q78l2\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324473 5028 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324482 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324490 5028 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324498 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324519 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.324528 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.325032 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.325277 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data podName:8399afb1-fbd2-4ce0-b980-46b317d6cfee nodeName:}" failed. No retries permitted until 2025-11-23 07:13:45.32522967 +0000 UTC m=+1409.022634449 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data") pod "rabbitmq-server-0" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee") : configmap "rabbitmq-config-data" not found Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.327065 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance8cc3-account-delete-rmsjp"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.348026 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.374805 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican5527-account-delete-7gzrg"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.427163 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.532522 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell08b1e-account-delete-qctpn"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.625974 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi35e8-account-delete-8zqfb"] Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.734364 5028 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.734401 5028 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.734411 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.734424 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.734483 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:45.734466254 +0000 UTC m=+1409.431871033 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.908054 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.908267 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement31f4-account-delete-9dth6" event={"ID":"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31","Type":"ContainerStarted","Data":"63b8e89fb1de2fc338f25d77ae5a049681cdad5b98161e7cec9b04e9d722fa94"} Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.908305 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement31f4-account-delete-9dth6" event={"ID":"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31","Type":"ContainerStarted","Data":"c2002946611372b709f78d1ed62680c81900af1f93edce97a316f81a8f05ec30"} Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.908332 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-central-agent" containerID="cri-o://dde5c69666e70bb8f75a7ed7a789c3ae7acdc1f1acf508ccc160a39ef016782c" gracePeriod=30 Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.909499 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="sg-core" containerID="cri-o://fb676e1b2e2c02ba89cd38ce4f10b302e56efd80c670bac63573599363bc4fb7" gracePeriod=30 Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.909595 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="proxy-httpd" containerID="cri-o://66ffc82e2a92f52c482cb6d579b807e3af34155790eb820d2d16eecf464cd86d" gracePeriod=30 Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.909659 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-notification-agent" containerID="cri-o://18362900b5e9f33833027654ded3247f26684450ed496934014046dcf552169b" gracePeriod=30 Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.920260 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.924186 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.925665 5028 generic.go:334] "Generic (PLEG): container finished" podID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerID="2a139570e27e4d3f8e409cd6102e980a7101788faa70c851a25210cd2a440e17" 
exitCode=0 Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.925772 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b69c59d9-28nfd" event={"ID":"230d8024-5d83-4742-9bf9-77bc956dd4a9","Type":"ContainerDied","Data":"2a139570e27e4d3f8e409cd6102e980a7101788faa70c851a25210cd2a440e17"} Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.925799 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-77b69c59d9-28nfd" event={"ID":"230d8024-5d83-4742-9bf9-77bc956dd4a9","Type":"ContainerDied","Data":"0ad4b7a0e7d2ec73592ccb88243a33a7bd7a54a36aedb7e571bfc89d8829f552"} Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.925809 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad4b7a0e7d2ec73592ccb88243a33a7bd7a54a36aedb7e571bfc89d8829f552" Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.928059 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:13:41 crc kubenswrapper[5028]: E1123 07:13:41.928104 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="daff2282-19f8-48c7-8d1b-780fbe97ec5a" containerName="nova-cell0-conductor-conductor" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.938358 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77b69c59d9-28nfd" Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.953121 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.954231 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9ad676ea-95d4-483f-a7d7-574744376b19" containerName="kube-state-metrics" containerID="cri-o://efc762443a33b269b97ec4b3cde54d3c8c727b78e4871cb2e7039c7badde7203" gracePeriod=30 Nov 23 07:13:41 crc kubenswrapper[5028]: I1123 07:13:41.964408 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8cc3-account-delete-rmsjp" event={"ID":"261d60cc-ee7d-463e-add8-4a4e8af392cd","Type":"ContainerStarted","Data":"829814c5bd791c3940f9832e644ed882bf2d481d27976ce5d5c9a7099667130b"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:41.997862 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement31f4-account-delete-9dth6" podStartSLOduration=4.997845492 podStartE2EDuration="4.997845492s" podCreationTimestamp="2025-11-23 07:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:13:41.969563831 +0000 UTC m=+1405.666968610" watchObservedRunningTime="2025-11-23 07:13:41.997845492 +0000 UTC m=+1405.695250271" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.016157 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder0e8f-account-delete-xlplw" event={"ID":"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd","Type":"ContainerStarted","Data":"e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4"} 
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.016197 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder0e8f-account-delete-xlplw" event={"ID":"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd","Type":"ContainerStarted","Data":"dfe4eb387e3cd9102db4e0f498f40426d3e2ce26235f8e1766f9a8a01510eb54"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.026650 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3","Type":"ContainerDied","Data":"006ab7ab67f29786ec3083302aa1080c06b96aed92b427f6b754d6543f3a78d1"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.026700 5028 scope.go:117] "RemoveContainer" containerID="def0d1a5d1d2c89fcd962a46215131a3c686adec6de8fc5a5ece7cb87528bfac" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.027085 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039264 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-internal-tls-certs\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039302 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-run-httpd\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039427 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-config-data\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039442 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-public-tls-certs\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039459 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-log-httpd\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039479 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l488c\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-kube-api-access-l488c\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039499 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-etc-swift\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.039545 5028 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-combined-ca-bundle\") pod \"230d8024-5d83-4742-9bf9-77bc956dd4a9\" (UID: \"230d8024-5d83-4742-9bf9-77bc956dd4a9\") " Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.041173 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.041989 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.070694 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi35e8-account-delete-8zqfb" event={"ID":"c83094db-b0cb-4be4-a13b-de12d76e1fb0","Type":"ContainerStarted","Data":"059b3d43d4834c327f5fc100298a2a88f412d2995e26f61f3c2481531d97d98a"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.104017 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-kube-api-access-l488c" (OuterVolumeSpecName: "kube-api-access-l488c") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "kube-api-access-l488c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.117982 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5527-account-delete-7gzrg" event={"ID":"a00e6664-ab67-4532-9c12-89c6fa223993","Type":"ContainerStarted","Data":"44abe3e25366bc6d2873e8645c61f50dc35f50c5053651c718e28b2667fbf09d"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.120214 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.120642 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.141028 5028 scope.go:117] "RemoveContainer" containerID="5598b842a936be170ee1be87aa71ee10dbeff505eeb40e596a071e95109e33e5" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.141498 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.141523 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l488c\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-kube-api-access-l488c\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.141532 5028 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/230d8024-5d83-4742-9bf9-77bc956dd4a9-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.142795 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/230d8024-5d83-4742-9bf9-77bc956dd4a9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.142608 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.142981 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts podName:4c18bacc-44ba-49cc-9c2b-9d06834a8cdd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:42.642933939 +0000 UTC m=+1406.340338718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts") pod "cinder0e8f-account-delete-xlplw" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd") : configmap "openstack-scripts" not found Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.167767 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell08b1e-account-delete-qctpn" event={"ID":"75ea620d-8ec5-47db-a758-11a1e3c9d605","Type":"ContainerStarted","Data":"ab960e602ff9a88d7cef581f8763cc60e1d2d310c671fab15a3bf9e6b1058457"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.177172 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.177382 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" containerName="memcached" containerID="cri-o://5f2108c80300b01fc1d30c52fcb9398b908685239d7a2e69ca2f22d1daf75c65" gracePeriod=30 Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.189423 5028 generic.go:334] "Generic (PLEG): container finished" podID="bf69949e-5b98-4cb2-9ce6-f979da44c58e" containerID="60d5dfda719cca7c70347469f4adf01b003d4790534774d7508b2cbaec093460" exitCode=0 Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.189548 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55bfb77665-zk585" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.189866 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron8471-account-delete-6lm46" event={"ID":"bf69949e-5b98-4cb2-9ce6-f979da44c58e","Type":"ContainerDied","Data":"60d5dfda719cca7c70347469f4adf01b003d4790534774d7508b2cbaec093460"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.189911 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron8471-account-delete-6lm46" event={"ID":"bf69949e-5b98-4cb2-9ce6-f979da44c58e","Type":"ContainerStarted","Data":"d27f8531e27348ef0f7b4d39e153a1e2cbc0bf1997388932f97d3b542fa626cc"} Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.203512 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder0e8f-account-delete-xlplw" podStartSLOduration=5.203490699 podStartE2EDuration="5.203490699s" podCreationTimestamp="2025-11-23 07:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:13:42.038546417 +0000 UTC m=+1405.735951196" watchObservedRunningTime="2025-11-23 07:13:42.203490699 +0000 UTC m=+1405.900895478" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.224045 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.244046 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.252918 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xw8q6"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.256846 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.164:8776/healthcheck\": read tcp 10.217.0.2:50382->10.217.0.164:8776: read: connection reset by peer" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.266237 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-79f64857b-ngrdb"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.266679 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-79f64857b-ngrdb" podUID="f5f67379-72d4-46f4-844f-b00c8f912169" containerName="keystone-api" containerID="cri-o://0825116d750ee80fd532b8320d8c85f3b4fe208475c0c2c3203bb0ac33a3586a" gracePeriod=30 Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.278915 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xw8q6"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.285698 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xf95z"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.293253 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xf95z"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.299919 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystoneeb1f-account-delete-fxd29"] Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300373 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="ovsdbserver-sb" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 
07:13:42.300390 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="ovsdbserver-sb" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300406 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-httpd" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300413 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-httpd" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300421 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-server" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300428 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-server" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300439 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300445 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300462 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerName="dnsmasq-dns" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300468 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerName="dnsmasq-dns" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300476 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerName="galera" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300484 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerName="galera" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300498 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerName="init" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300507 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerName="init" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300516 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="ovsdbserver-nb" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300522 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="ovsdbserver-nb" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300532 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2670a19-fe04-4055-905d-f9a6f8d8b0b3" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300539 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2670a19-fe04-4055-905d-f9a6f8d8b0b3" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300547 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="probe" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 
07:13:42.300552 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="probe" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300564 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="cinder-scheduler" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300569 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="cinder-scheduler" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300578 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300584 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300599 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerName="mysql-bootstrap" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300605 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerName="mysql-bootstrap" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.300616 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa89bc7-a850-429e-a0ef-c5f2906b0d18" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300623 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa89bc7-a850-429e-a0ef-c5f2906b0d18" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300799 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" containerName="galera" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300812 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa89bc7-a850-429e-a0ef-c5f2906b0d18" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300822 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="cinder-scheduler" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300831 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="ovsdbserver-nb" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300843 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-server" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300854 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2670a19-fe04-4055-905d-f9a6f8d8b0b3" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300867 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-httpd" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300875 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="ovsdbserver-sb" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300885 5028 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="02a942e1-e2f6-45ea-829d-70d45cca4860" containerName="dnsmasq-dns" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300894 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" containerName="probe" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300903 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="595ec560-1f5a-44f8-bf67-feee6223a090" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.300913 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05065f0-2269-4a88-abdf-45d2523ac60b" containerName="openstack-network-exporter" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.301508 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.307264 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystoneeb1f-account-delete-fxd29"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.320118 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.325774 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-fxq6t"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.337012 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-fxq6t"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.356099 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystoneeb1f-account-delete-fxd29"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.373517 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-eb1f-account-create-2qccz"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.381208 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-eb1f-account-create-2qccz"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.396362 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55bfb77665-zk585"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.402303 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55bfb77665-zk585"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.408177 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.417203 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.431015 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-c952v"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.453429 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tw7j\" (UniqueName: \"kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j\") pod \"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.454794 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts\") pod 
\"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.467879 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-c952v"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.479743 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d87c9f496-cstmz" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:35018->10.217.0.160:9311: read: connection reset by peer" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.480894 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7d87c9f496-cstmz" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:35022->10.217.0.160:9311: read: connection reset by peer" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.497063 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5527-account-create-swksf"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.504152 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-config-data" (OuterVolumeSpecName: "config-data") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.523763 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican5527-account-delete-7gzrg"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.535433 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5527-account-create-swksf"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.550407 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-lp7d2"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.557840 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts\") pod \"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.557992 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tw7j\" (UniqueName: \"kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j\") pod \"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.558032 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.558064 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.558473 5028 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts podName:841776d0-3a86-46aa-9b13-86a9060620d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:43.058398095 +0000 UTC m=+1406.755802874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts") pod "keystoneeb1f-account-delete-fxd29" (UID: "841776d0-3a86-46aa-9b13-86a9060620d7") : configmap "openstack-scripts" not found Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.566738 5028 projected.go:194] Error preparing data for projected volume kube-api-access-5tw7j for pod openstack/keystoneeb1f-account-delete-fxd29: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.566824 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j podName:841776d0-3a86-46aa-9b13-86a9060620d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:43.066808311 +0000 UTC m=+1406.764213090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5tw7j" (UniqueName: "kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j") pod "keystoneeb1f-account-delete-fxd29" (UID: "841776d0-3a86-46aa-9b13-86a9060620d7") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.611077 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-lp7d2"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.638122 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder0e8f-account-delete-xlplw"] Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.641409 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.662932 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.662987 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:42 crc kubenswrapper[5028]: E1123 07:13:42.663323 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts podName:4c18bacc-44ba-49cc-9c2b-9d06834a8cdd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:43.66330505 +0000 UTC m=+1407.360709829 (durationBeforeRetry 1s). 
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.698323 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0e8f-account-create-bv7pl"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.721007 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0e8f-account-create-bv7pl"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.747914 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-tgkzz"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.760775 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-tgkzz"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.792129 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-31f4-account-create-nm5fp"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.806778 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement31f4-account-delete-9dth6"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.816094 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.816167 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-31f4-account-create-nm5fp"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.828839 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "230d8024-5d83-4742-9bf9-77bc956dd4a9" (UID: "230d8024-5d83-4742-9bf9-77bc956dd4a9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.844018 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-snkp6"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.856058 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-snkp6"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.867349 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.867382 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/230d8024-5d83-4742-9bf9-77bc956dd4a9-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.878075 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8cc3-account-create-kq8vp"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.883702 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance8cc3-account-delete-rmsjp"]
Nov 23 07:13:42 crc kubenswrapper[5028]: I1123 07:13:42.896692 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8cc3-account-create-kq8vp"]
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.042890 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xz4c6"]
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.071358 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tw7j\" (UniqueName: \"kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j\") pod \"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.071482 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts\") pod \"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29"
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.071608 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.071653 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts podName:841776d0-3a86-46aa-9b13-86a9060620d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:44.071640002 +0000 UTC m=+1407.769044781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts") pod "keystoneeb1f-account-delete-fxd29" (UID: "841776d0-3a86-46aa-9b13-86a9060620d7") : configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.079110 5028 projected.go:194] Error preparing data for projected volume kube-api-access-5tw7j for pod openstack/keystoneeb1f-account-delete-fxd29: failed to fetch token: serviceaccounts "galera-openstack" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.079181 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j podName:841776d0-3a86-46aa-9b13-86a9060620d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:44.079160015 +0000 UTC m=+1407.776564794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5tw7j" (UniqueName: "kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j") pod "keystoneeb1f-account-delete-fxd29" (UID: "841776d0-3a86-46aa-9b13-86a9060620d7") : failed to fetch token: serviceaccounts "galera-openstack" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.092689 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a942e1-e2f6-45ea-829d-70d45cca4860" path="/var/lib/kubelet/pods/02a942e1-e2f6-45ea-829d-70d45cca4860/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.093329 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f60db75-6beb-411d-afad-7841174fbf40" path="/var/lib/kubelet/pods/1f60db75-6beb-411d-afad-7841174fbf40/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.093846 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fe660d5-bccb-427c-8e24-ee10b19d38cb" path="/var/lib/kubelet/pods/2fe660d5-bccb-427c-8e24-ee10b19d38cb/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.097476 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e88bac4-6433-4c6a-a36f-433aec6c760c" path="/var/lib/kubelet/pods/4e88bac4-6433-4c6a-a36f-433aec6c760c/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.098378 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551aa7b7-8791-467e-9d61-0061389e8095" path="/var/lib/kubelet/pods/551aa7b7-8791-467e-9d61-0061389e8095/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.098889 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa758fb-409d-4900-91b5-a424479b614e" path="/var/lib/kubelet/pods/6aa758fb-409d-4900-91b5-a424479b614e/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.099423 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740e0b0c-f37c-4acf-8b98-847b26213c28" path="/var/lib/kubelet/pods/740e0b0c-f37c-4acf-8b98-847b26213c28/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.102463 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794e1c4d-3639-4b06-9a8b-5597fe8fa4c4" path="/var/lib/kubelet/pods/794e1c4d-3639-4b06-9a8b-5597fe8fa4c4/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.103217 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2c10f1-db6c-432e-a8d5-f695179ecd2f" path="/var/lib/kubelet/pods/9d2c10f1-db6c-432e-a8d5-f695179ecd2f/volumes"
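These kubelet_volumes lines are periodic housekeeping: the listed pod UIDs are gone from the API, their volumes are unmounted, and only empty /var/lib/kubelet/pods/<uid>/volumes directories remain on disk. A rough sketch of that sweep, assuming a knownPods set that stands in for the kubelet's pod manager (the real kubelet also refuses to delete while anything is still mounted):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Sweep the pods root and delete the leftover volumes dir of any pod UID
    // the kubelet no longer knows about.
    func cleanupOrphanedPodDirs(root string, knownPods map[string]bool) error {
        entries, err := os.ReadDir(root)
        if err != nil {
            return err
        }
        for _, e := range entries {
            if !e.IsDir() || knownPods[e.Name()] {
                continue
            }
            dir := filepath.Join(root, e.Name(), "volumes")
            if err := os.RemoveAll(dir); err != nil {
                return err
            }
            fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", e.Name(), dir)
        }
        return nil
    }

    func main() {
        root, _ := os.MkdirTemp("", "pods") // demo layout, not /var/lib/kubelet
        os.MkdirAll(filepath.Join(root, "02a942e1", "volumes"), 0o755)
        cleanupOrphanedPodDirs(root, map[string]bool{}) // no pods known: dir is orphaned
    }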
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.108450 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0bfc63d-fbf6-48be-b850-a3370894112b" path="/var/lib/kubelet/pods/a0bfc63d-fbf6-48be-b850-a3370894112b/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.109000 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51c114c-7132-45e8-9e6e-ec0c783ede0f" path="/var/lib/kubelet/pods/b51c114c-7132-45e8-9e6e-ec0c783ede0f/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.112338 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9255961-4bb2-4ffc-af3d-9fd8998c59d6" path="/var/lib/kubelet/pods/b9255961-4bb2-4ffc-af3d-9fd8998c59d6/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.117126 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3" path="/var/lib/kubelet/pods/de8b5e2a-0ac8-4e2e-825d-81ba9c3942e3/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.118159 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6621763-252c-443e-9049-5d13e231e916" path="/var/lib/kubelet/pods/e6621763-252c-443e-9049-5d13e231e916/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.118792 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa89bc7-a850-429e-a0ef-c5f2906b0d18" path="/var/lib/kubelet/pods/eaa89bc7-a850-429e-a0ef-c5f2906b0d18/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.125195 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc4e92e6-843f-4963-b659-fe67d1c71c8b" path="/var/lib/kubelet/pods/fc4e92e6-843f-4963-b659-fe67d1c71c8b/volumes"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.127932 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xz4c6"]
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.150141 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell08b1e-account-delete-qctpn"]
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.157344 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.179761 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.184747 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" cmd=["/usr/local/bin/container-scripts/status_check.sh"]
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.184814 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="ovn-northd"
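The three ExecSync failures above are the container runtime refusing to register a new exec session in a container that is already stopping, so the ovn-northd readiness probe errors out rather than merely failing. The shape of an exec-style probe, sketched with os/exec standing in for the CRI ExecSync call:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // execProbe runs a readiness command with a timeout; exit code 0 means
    // ready. In the kubelet this is an ExecSync RPC to the runtime, which is
    // why a stopping container surfaces as a probe *error*, not "not ready".
    func execProbe(cmd []string, timeout time.Duration) (ready bool, err error) {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        c := exec.CommandContext(ctx, cmd[0], cmd[1:]...)
        if err := c.Run(); err != nil {
            if _, ok := err.(*exec.ExitError); ok {
                return false, nil // ran but exited non-zero: probe failed
            }
            return false, err // could not run at all: probe errored
        }
        return true, nil
    }

    func main() {
        ready, err := execProbe([]string{"true"}, time.Second)
        fmt.Println(ready, err) // true <nil>
    }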
podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="ovn-northd" Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.190548 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8b1e-account-create-mq5tg"] Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.198303 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerName="galera" containerID="cri-o://241f9f536ab38394348af160013fb0390313172f6bde249dc5d6d02b4ba10fb4" gracePeriod=30 Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.217575 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8b1e-account-create-mq5tg"] Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.232206 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kqp69"] Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.232246 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kqp69"] Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.239199 5028 generic.go:334] "Generic (PLEG): container finished" podID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerID="5c6c526d79aa0e5f2a9c03d7440a3625e79fc7e6164cb907a3f55aad201ead50" exitCode=0 Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.239241 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4c494dd6-rn255" event={"ID":"2a3fd963-7f73-4069-a993-1dfec6751c57","Type":"ContainerDied","Data":"5c6c526d79aa0e5f2a9c03d7440a3625e79fc7e6164cb907a3f55aad201ead50"} Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.239291 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4c494dd6-rn255" event={"ID":"2a3fd963-7f73-4069-a993-1dfec6751c57","Type":"ContainerDied","Data":"e36d88e83f9befa3fd088d496b13719b56f9915c234c134b23b30e8098107046"} Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.239306 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36d88e83f9befa3fd088d496b13719b56f9915c234c134b23b30e8098107046" Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.244422 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-35e8-account-create-ph7rt"] Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.247335 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi35e8-account-delete-8zqfb"] Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.252695 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-35e8-account-create-ph7rt"] Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.253602 5028 generic.go:334] "Generic (PLEG): container finished" podID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerID="ea5fcb780cb3db6a3d6792d1be448395cc967da3e44687458ed85dea64130699" exitCode=0 Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.253670 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c8549d57-5f4m7" event={"ID":"83c6472a-6c69-45ff-b0c2-3bf66e0523f2","Type":"ContainerDied","Data":"ea5fcb780cb3db6a3d6792d1be448395cc967da3e44687458ed85dea64130699"} Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.263886 5028 generic.go:334] "Generic (PLEG): container finished" podID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerID="b98edfeee83e098dad0a822a2d765cd74a15be4763ca4b5c2fbb0a7a9fda7f9f" exitCode=0 Nov 23 07:13:43 crc kubenswrapper[5028]: 
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.263981 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a92efa4-12b6-431d-8aa7-9baa545f7e07","Type":"ContainerDied","Data":"b98edfeee83e098dad0a822a2d765cd74a15be4763ca4b5c2fbb0a7a9fda7f9f"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.289797 5028 generic.go:334] "Generic (PLEG): container finished" podID="b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" containerID="5f2108c80300b01fc1d30c52fcb9398b908685239d7a2e69ca2f22d1daf75c65" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.289871 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50","Type":"ContainerDied","Data":"5f2108c80300b01fc1d30c52fcb9398b908685239d7a2e69ca2f22d1daf75c65"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.294304 5028 generic.go:334] "Generic (PLEG): container finished" podID="023257e8-ab54-4423-94bc-1f8d547afa69" containerID="0eafcd1a07324ac9778cdfa4b78db65ef912e1e1d8dddb571f38dfd760d9566d" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.294383 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"023257e8-ab54-4423-94bc-1f8d547afa69","Type":"ContainerDied","Data":"0eafcd1a07324ac9778cdfa4b78db65ef912e1e1d8dddb571f38dfd760d9566d"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.295839 5028 generic.go:334] "Generic (PLEG): container finished" podID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerID="5dc8cea2a082f78c34b4f793fb541c20becf1183832ccdc56cdb5c470fec475a" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.295880 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab","Type":"ContainerDied","Data":"5dc8cea2a082f78c34b4f793fb541c20becf1183832ccdc56cdb5c470fec475a"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.295895 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab","Type":"ContainerDied","Data":"8c1c06272ffce2f8fa6d5f9459b161d0f84f0b98e6629af52efa09c5832a0791"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.295920 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c1c06272ffce2f8fa6d5f9459b161d0f84f0b98e6629af52efa09c5832a0791"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.296969 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5527-account-delete-7gzrg" event={"ID":"a00e6664-ab67-4532-9c12-89c6fa223993","Type":"ContainerStarted","Data":"9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.297490 5028 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/barbican5527-account-delete-7gzrg" secret="" err="secret \"galera-openstack-dockercfg-rr7xq\" not found"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.319567 5028 generic.go:334] "Generic (PLEG): container finished" podID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerID="e31414b733266ebd168e3f95a8474d2ad5c2d2b753b7b52893516e95d7e66b97" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.319635 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" event={"ID":"534edd12-5e24-4d56-9f1a-944e1ed4a65b","Type":"ContainerDied","Data":"e31414b733266ebd168e3f95a8474d2ad5c2d2b753b7b52893516e95d7e66b97"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.320451 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican5527-account-delete-7gzrg" podStartSLOduration=6.320433393 podStartE2EDuration="6.320433393s" podCreationTimestamp="2025-11-23 07:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:13:43.317610224 +0000 UTC m=+1407.015015003" watchObservedRunningTime="2025-11-23 07:13:43.320433393 +0000 UTC m=+1407.017838172"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.323441 5028 generic.go:334] "Generic (PLEG): container finished" podID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerID="66ffc82e2a92f52c482cb6d579b807e3af34155790eb820d2d16eecf464cd86d" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.323467 5028 generic.go:334] "Generic (PLEG): container finished" podID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerID="fb676e1b2e2c02ba89cd38ce4f10b302e56efd80c670bac63573599363bc4fb7" exitCode=2
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.323477 5028 generic.go:334] "Generic (PLEG): container finished" podID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerID="dde5c69666e70bb8f75a7ed7a789c3ae7acdc1f1acf508ccc160a39ef016782c" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.323514 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerDied","Data":"66ffc82e2a92f52c482cb6d579b807e3af34155790eb820d2d16eecf464cd86d"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.323534 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerDied","Data":"fb676e1b2e2c02ba89cd38ce4f10b302e56efd80c670bac63573599363bc4fb7"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.323548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerDied","Data":"dde5c69666e70bb8f75a7ed7a789c3ae7acdc1f1acf508ccc160a39ef016782c"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.329301 5028 generic.go:334] "Generic (PLEG): container finished" podID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerID="f479f9aaf79c6e583bdbd977ec74f93da1e58b297f19fd2858b002f4f930227c" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.329337 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d87c9f496-cstmz" event={"ID":"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c","Type":"ContainerDied","Data":"f479f9aaf79c6e583bdbd977ec74f93da1e58b297f19fd2858b002f4f930227c"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.332751 5028 generic.go:334] "Generic (PLEG): container finished" podID="fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31" containerID="63b8e89fb1de2fc338f25d77ae5a049681cdad5b98161e7cec9b04e9d722fa94" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.332822 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement31f4-account-delete-9dth6" event={"ID":"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31","Type":"ContainerDied","Data":"63b8e89fb1de2fc338f25d77ae5a049681cdad5b98161e7cec9b04e9d722fa94"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.336079 5028 generic.go:334] "Generic (PLEG): container finished" podID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerID="ac4ead67e47260cf3f74b78ab8afc2ccc45af0107cfaafca7edd1336fddcee80" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.336172 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9cf78ba6-9116-42cf-8be5-809dd912646c","Type":"ContainerDied","Data":"ac4ead67e47260cf3f74b78ab8afc2ccc45af0107cfaafca7edd1336fddcee80"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.343846 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8cc3-account-delete-rmsjp" event={"ID":"261d60cc-ee7d-463e-add8-4a4e8af392cd","Type":"ContainerStarted","Data":"cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.344281 5028 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glance8cc3-account-delete-rmsjp" secret="" err="secret \"galera-openstack-dockercfg-rr7xq\" not found"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.363780 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell08b1e-account-delete-qctpn" event={"ID":"75ea620d-8ec5-47db-a758-11a1e3c9d605","Type":"ContainerStarted","Data":"c0e438b5feadcc60de0125b6ead3243fc9fdba982d8c527a3af16187c90ff94e"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.364341 5028 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell08b1e-account-delete-qctpn" secret="" err="secret \"galera-openstack-dockercfg-rr7xq\" not found"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.369824 5028 generic.go:334] "Generic (PLEG): container finished" podID="9ad676ea-95d4-483f-a7d7-574744376b19" containerID="efc762443a33b269b97ec4b3cde54d3c8c727b78e4871cb2e7039c7badde7203" exitCode=2
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.369896 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ad676ea-95d4-483f-a7d7-574744376b19","Type":"ContainerDied","Data":"efc762443a33b269b97ec4b3cde54d3c8c727b78e4871cb2e7039c7badde7203"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.374140 5028 generic.go:334] "Generic (PLEG): container finished" podID="daff2282-19f8-48c7-8d1b-780fbe97ec5a" containerID="718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.374194 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"daff2282-19f8-48c7-8d1b-780fbe97ec5a","Type":"ContainerDied","Data":"718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725"}
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.380214 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.380275 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts podName:a00e6664-ab67-4532-9c12-89c6fa223993 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:43.880258766 +0000 UTC m=+1407.577663545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts") pod "barbican5527-account-delete-7gzrg" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993") : configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.380598 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi35e8-account-delete-8zqfb" event={"ID":"c83094db-b0cb-4be4-a13b-de12d76e1fb0","Type":"ContainerStarted","Data":"72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.381575 5028 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novaapi35e8-account-delete-8zqfb" secret="" err="secret \"galera-openstack-dockercfg-rr7xq\" not found"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.394218 5028 generic.go:334] "Generic (PLEG): container finished" podID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerID="4217f079f64973f5810531f58f78f10d09ef89a5b6288de55515155677e95e0a" exitCode=0
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.394338 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d35d26e-c3f7-4597-80c6-60358f2d2c21","Type":"ContainerDied","Data":"4217f079f64973f5810531f58f78f10d09ef89a5b6288de55515155677e95e0a"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.394375 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-77b69c59d9-28nfd"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.394376 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d35d26e-c3f7-4597-80c6-60358f2d2c21","Type":"ContainerDied","Data":"b9799ec3d2cd4efb9c885be833eb07aea0c04d7fe621cef5862759b070903940"}
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.394497 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9799ec3d2cd4efb9c885be833eb07aea0c04d7fe621cef5862759b070903940"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.395402 5028 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/cinder0e8f-account-delete-xlplw" secret="" err="secret \"galera-openstack-dockercfg-rr7xq\" not found"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.402299 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance8cc3-account-delete-rmsjp" podStartSLOduration=6.402279334 podStartE2EDuration="6.402279334s" podCreationTimestamp="2025-11-23 07:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:13:43.369324069 +0000 UTC m=+1407.066728858" watchObservedRunningTime="2025-11-23 07:13:43.402279334 +0000 UTC m=+1407.099684113"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.427764 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell08b1e-account-delete-qctpn" podStartSLOduration=5.427745247 podStartE2EDuration="5.427745247s" podCreationTimestamp="2025-11-23 07:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:13:43.382153422 +0000 UTC m=+1407.079558201" watchObservedRunningTime="2025-11-23 07:13:43.427745247 +0000 UTC m=+1407.125150026"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.447445 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapi35e8-account-delete-8zqfb" podStartSLOduration=5.447425938 podStartE2EDuration="5.447425938s" podCreationTimestamp="2025-11-23 07:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:13:43.394310219 +0000 UTC m=+1407.091714998" watchObservedRunningTime="2025-11-23 07:13:43.447425938 +0000 UTC m=+1407.144830717"
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.486009 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.486060 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts podName:75ea620d-8ec5-47db-a758-11a1e3c9d605 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:43.986045572 +0000 UTC m=+1407.683450341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts") pod "novacell08b1e-account-delete-qctpn" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605") : configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.486308 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.486334 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts podName:c83094db-b0cb-4be4-a13b-de12d76e1fb0 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:43.986327489 +0000 UTC m=+1407.683732268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts") pod "novaapi35e8-account-delete-8zqfb" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0") : configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.488144 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.488218 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts podName:261d60cc-ee7d-463e-add8-4a4e8af392cd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:43.988200275 +0000 UTC m=+1407.685605054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts") pod "glance8cc3-account-delete-rmsjp" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd") : configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.694070 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.694162 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts podName:4c18bacc-44ba-49cc-9c2b-9d06834a8cdd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:45.694141199 +0000 UTC m=+1409.391546048 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts") pod "cinder0e8f-account-delete-xlplw" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd") : configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.704760 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.198:3000/\": dial tcp 10.217.0.198:3000: connect: connection refused"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.879938 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-77b69c59d9-28nfd"]
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.886357 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-5tw7j operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystoneeb1f-account-delete-fxd29" podUID="841776d0-3a86-46aa-9b13-86a9060620d7"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.890663 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-77b69c59d9-28nfd"]
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.892354 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.901255 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: E1123 07:13:43.901313 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts podName:a00e6664-ab67-4532-9c12-89c6fa223993 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:44.901299193 +0000 UTC m=+1408.598703972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts") pod "barbican5527-account-delete-7gzrg" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993") : configmap "openstack-scripts" not found
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.905571 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.908525 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b4c494dd6-rn255"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.965369 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.982458 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d87c9f496-cstmz"
Nov 23 07:13:43 crc kubenswrapper[5028]: I1123 07:13:43.985267 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005046 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-config-data\") pod \"2a3fd963-7f73-4069-a993-1dfec6751c57\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005118 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-internal-tls-certs\") pod \"2a3fd963-7f73-4069-a993-1dfec6751c57\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005181 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-logs\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005199 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data\") pod \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005222 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh7x9\" (UniqueName: \"kubernetes.io/projected/2a3fd963-7f73-4069-a993-1dfec6751c57-kube-api-access-dh7x9\") pod \"2a3fd963-7f73-4069-a993-1dfec6751c57\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005261 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-config-data\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005284 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tspc8\" (UniqueName: \"kubernetes.io/projected/7d35d26e-c3f7-4597-80c6-60358f2d2c21-kube-api-access-tspc8\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005299 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-combined-ca-bundle\") pod \"2a3fd963-7f73-4069-a993-1dfec6751c57\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005316 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-logs\") pod \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005364 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-scripts\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 
07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005386 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-nova-metadata-tls-certs\") pod \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005421 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-internal-tls-certs\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005446 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a3fd963-7f73-4069-a993-1dfec6751c57-logs\") pod \"2a3fd963-7f73-4069-a993-1dfec6751c57\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005468 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-combined-ca-bundle\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005487 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-public-tls-certs\") pod \"2a3fd963-7f73-4069-a993-1dfec6751c57\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005516 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005548 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-httpd-run\") pod \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\" (UID: \"7d35d26e-c3f7-4597-80c6-60358f2d2c21\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005575 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-scripts\") pod \"2a3fd963-7f73-4069-a993-1dfec6751c57\" (UID: \"2a3fd963-7f73-4069-a993-1dfec6751c57\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005608 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcrfw\" (UniqueName: \"kubernetes.io/projected/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-kube-api-access-jcrfw\") pod \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.005629 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-combined-ca-bundle\") pod \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 
07:13:44.006480 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.009390 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.009474 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.010700 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts podName:c83094db-b0cb-4be4-a13b-de12d76e1fb0 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:45.009505018 +0000 UTC m=+1408.706909807 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts") pod "novaapi35e8-account-delete-8zqfb" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0") : configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.011068 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.011109 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts podName:261d60cc-ee7d-463e-add8-4a4e8af392cd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:45.011098947 +0000 UTC m=+1408.708503726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts") pod "glance8cc3-account-delete-rmsjp" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd") : configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.011144 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.011193 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts podName:75ea620d-8ec5-47db-a758-11a1e3c9d605 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:45.011184639 +0000 UTC m=+1408.708589418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts") pod "novacell08b1e-account-delete-qctpn" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605") : configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.013198 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "local-storage08-crc". 
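The burst of "operationExecutor.UnmountVolume started" lines, followed by "Volume detached" confirmations, is the volume manager's reconciler at work: it diffs the desired state of the world (what the surviving pods still need mounted) against the actual state (what is mounted right now) and starts one teardown per stale volume. Stripped to its core, assuming plain maps for the two states:

    package main

    import "fmt"

    // reconcile triggers one unmount per volume that is mounted but no
    // longer wanted -- the pattern behind the reconciler_common.go lines.
    func reconcile(desired, actual map[string]bool, unmount func(string)) {
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
                unmount(vol)
            }
        }
    }

    func main() {
        actual := map[string]bool{"config-data": true, "logs": true, "kube-api-access-dh7x9": true}
        desired := map[string]bool{} // pod deleted: nothing should stay mounted
        reconcile(desired, actual, func(vol string) {
            delete(actual, vol) // TearDown succeeded -> volume detached
        })
        fmt.Println("volumes still mounted:", len(actual))
    }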
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.013774 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-kube-api-access-jcrfw" (OuterVolumeSpecName: "kube-api-access-jcrfw") pod "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" (UID: "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab"). InnerVolumeSpecName "kube-api-access-jcrfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.010140 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a3fd963-7f73-4069-a993-1dfec6751c57-logs" (OuterVolumeSpecName: "logs") pod "2a3fd963-7f73-4069-a993-1dfec6751c57" (UID: "2a3fd963-7f73-4069-a993-1dfec6751c57"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.020159 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-logs" (OuterVolumeSpecName: "logs") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.020901 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-logs" (OuterVolumeSpecName: "logs") pod "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" (UID: "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.023584 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.024663 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-scripts" (OuterVolumeSpecName: "scripts") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.027229 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3fd963-7f73-4069-a993-1dfec6751c57-kube-api-access-dh7x9" (OuterVolumeSpecName: "kube-api-access-dh7x9") pod "2a3fd963-7f73-4069-a993-1dfec6751c57" (UID: "2a3fd963-7f73-4069-a993-1dfec6751c57"). InnerVolumeSpecName "kube-api-access-dh7x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.028130 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-scripts" (OuterVolumeSpecName: "scripts") pod "2a3fd963-7f73-4069-a993-1dfec6751c57" (UID: "2a3fd963-7f73-4069-a993-1dfec6751c57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.044203 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.048261 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d35d26e-c3f7-4597-80c6-60358f2d2c21-kube-api-access-tspc8" (OuterVolumeSpecName: "kube-api-access-tspc8") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "kube-api-access-tspc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.083596 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.084144 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.102302 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.103517 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" (UID: "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.112355 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113050 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-config-data\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113148 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113213 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cggmb\" (UniqueName: \"kubernetes.io/projected/023257e8-ab54-4423-94bc-1f8d547afa69-kube-api-access-cggmb\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113271 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flrdp\" (UniqueName: \"kubernetes.io/projected/9cf78ba6-9116-42cf-8be5-809dd912646c-kube-api-access-flrdp\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113345 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-logs\") pod \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113375 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-certs\") pod \"9ad676ea-95d4-483f-a7d7-574744376b19\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113428 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-combined-ca-bundle\") pod \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113454 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data-custom\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113480 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-logs\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113499 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-combined-ca-bundle\") pod \"9ad676ea-95d4-483f-a7d7-574744376b19\" (UID: 
\"9ad676ea-95d4-483f-a7d7-574744376b19\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-combined-ca-bundle\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113559 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data\") pod \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113582 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data-custom\") pod \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113604 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-scripts\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113637 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz5sh\" (UniqueName: \"kubernetes.io/projected/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-kube-api-access-mz5sh\") pod \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113685 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-internal-tls-certs\") pod \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113723 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/023257e8-ab54-4423-94bc-1f8d547afa69-logs\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113750 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-combined-ca-bundle\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113772 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-public-tls-certs\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113792 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-scripts\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") 
" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113831 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-internal-tls-certs\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113869 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-public-tls-certs\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.113893 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/023257e8-ab54-4423-94bc-1f8d547afa69-etc-machine-id\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114014 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-httpd-run\") pod \"9cf78ba6-9116-42cf-8be5-809dd912646c\" (UID: \"9cf78ba6-9116-42cf-8be5-809dd912646c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114044 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mfw5\" (UniqueName: \"kubernetes.io/projected/9ad676ea-95d4-483f-a7d7-574744376b19-kube-api-access-4mfw5\") pod \"9ad676ea-95d4-483f-a7d7-574744376b19\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114081 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-config\") pod \"9ad676ea-95d4-483f-a7d7-574744376b19\" (UID: \"9ad676ea-95d4-483f-a7d7-574744376b19\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114109 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-public-tls-certs\") pod \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\" (UID: \"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114129 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data\") pod \"023257e8-ab54-4423-94bc-1f8d547afa69\" (UID: \"023257e8-ab54-4423-94bc-1f8d547afa69\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114372 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts\") pod \"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114569 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tw7j\" (UniqueName: \"kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j\") pod 
\"keystoneeb1f-account-delete-fxd29\" (UID: \"841776d0-3a86-46aa-9b13-86a9060620d7\") " pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114844 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114862 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcrfw\" (UniqueName: \"kubernetes.io/projected/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-kube-api-access-jcrfw\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114876 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114887 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d35d26e-c3f7-4597-80c6-60358f2d2c21-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114898 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh7x9\" (UniqueName: \"kubernetes.io/projected/2a3fd963-7f73-4069-a993-1dfec6751c57-kube-api-access-dh7x9\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114912 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tspc8\" (UniqueName: \"kubernetes.io/projected/7d35d26e-c3f7-4597-80c6-60358f2d2c21-kube-api-access-tspc8\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114922 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114933 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114960 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a3fd963-7f73-4069-a993-1dfec6751c57-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.114986 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.124990 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/023257e8-ab54-4423-94bc-1f8d547afa69-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.126906 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.127935 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023257e8-ab54-4423-94bc-1f8d547afa69-logs" (OuterVolumeSpecName: "logs") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.128124 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.132236 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-logs" (OuterVolumeSpecName: "logs") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.133104 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.133172 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts podName:841776d0-3a86-46aa-9b13-86a9060620d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:46.13315202 +0000 UTC m=+1409.830556889 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts") pod "keystoneeb1f-account-delete-fxd29" (UID: "841776d0-3a86-46aa-9b13-86a9060620d7") : configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.133645 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-logs" (OuterVolumeSpecName: "logs") pod "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" (UID: "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.134433 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.143020 5028 projected.go:194] Error preparing data for projected volume kube-api-access-5tw7j for pod openstack/keystoneeb1f-account-delete-fxd29: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.143545 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j podName:841776d0-3a86-46aa-9b13-86a9060620d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:46.143520793 +0000 UTC m=+1409.840925572 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5tw7j" (UniqueName: "kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j") pod "keystoneeb1f-account-delete-fxd29" (UID: "841776d0-3a86-46aa-9b13-86a9060620d7") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.176193 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.176218 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-scripts" (OuterVolumeSpecName: "scripts") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.176191 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023257e8-ab54-4423-94bc-1f8d547afa69-kube-api-access-cggmb" (OuterVolumeSpecName: "kube-api-access-cggmb") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "kube-api-access-cggmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.176266 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-kube-api-access-mz5sh" (OuterVolumeSpecName: "kube-api-access-mz5sh") pod "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" (UID: "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c"). InnerVolumeSpecName "kube-api-access-mz5sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.176341 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad676ea-95d4-483f-a7d7-574744376b19-kube-api-access-4mfw5" (OuterVolumeSpecName: "kube-api-access-4mfw5") pod "9ad676ea-95d4-483f-a7d7-574744376b19" (UID: "9ad676ea-95d4-483f-a7d7-574744376b19"). InnerVolumeSpecName "kube-api-access-4mfw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.176375 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" (UID: "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.177746 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.178406 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf78ba6-9116-42cf-8be5-809dd912646c-kube-api-access-flrdp" (OuterVolumeSpecName: "kube-api-access-flrdp") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). InnerVolumeSpecName "kube-api-access-flrdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.179257 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-scripts" (OuterVolumeSpecName: "scripts") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.198619 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-config-data" (OuterVolumeSpecName: "config-data") pod "2a3fd963-7f73-4069-a993-1dfec6751c57" (UID: "2a3fd963-7f73-4069-a993-1dfec6751c57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218385 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-logs\") pod \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218496 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn2t4\" (UniqueName: \"kubernetes.io/projected/534edd12-5e24-4d56-9f1a-944e1ed4a65b-kube-api-access-qn2t4\") pod \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218563 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-combined-ca-bundle\") pod \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218732 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-combined-ca-bundle\") pod \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218802 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kolla-config\") pod \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218836 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534edd12-5e24-4d56-9f1a-944e1ed4a65b-logs\") pod \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218899 5028 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nphlw\" (UniqueName: \"kubernetes.io/projected/3a92efa4-12b6-431d-8aa7-9baa545f7e07-kube-api-access-nphlw\") pod \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.218979 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-internal-tls-certs\") pod \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219068 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx9nv\" (UniqueName: \"kubernetes.io/projected/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kube-api-access-tx9nv\") pod \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219140 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-config-data\") pod \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219173 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data\") pod \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219257 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-combined-ca-bundle\") pod \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219329 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data-custom\") pod \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219354 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-config-data\") pod \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219422 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data-custom\") pod \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219498 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-config-data\") pod \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219567 5028 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-public-tls-certs\") pod \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219651 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle\") pod \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219723 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-memcached-tls-certs\") pod \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\" (UID: \"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219814 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf69949e-5b98-4cb2-9ce6-f979da44c58e-operator-scripts\") pod \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219840 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a92efa4-12b6-431d-8aa7-9baa545f7e07-logs\") pod \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219893 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqpjr\" (UniqueName: \"kubernetes.io/projected/bf69949e-5b98-4cb2-9ce6-f979da44c58e-kube-api-access-qqpjr\") pod \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\" (UID: \"bf69949e-5b98-4cb2-9ce6-f979da44c58e\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219915 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data\") pod \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\" (UID: \"534edd12-5e24-4d56-9f1a-944e1ed4a65b\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.219980 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9cq2\" (UniqueName: \"kubernetes.io/projected/daff2282-19f8-48c7-8d1b-780fbe97ec5a-kube-api-access-g9cq2\") pod \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.222343 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/534edd12-5e24-4d56-9f1a-944e1ed4a65b-logs" (OuterVolumeSpecName: "logs") pod "534edd12-5e24-4d56-9f1a-944e1ed4a65b" (UID: "534edd12-5e24-4d56-9f1a-944e1ed4a65b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.224591 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-combined-ca-bundle\") pod \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\" (UID: \"daff2282-19f8-48c7-8d1b-780fbe97ec5a\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.224640 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649db\" (UniqueName: \"kubernetes.io/projected/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-kube-api-access-649db\") pod \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\" (UID: \"83c6472a-6c69-45ff-b0c2-3bf66e0523f2\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225599 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225626 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225643 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225655 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225668 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz5sh\" (UniqueName: \"kubernetes.io/projected/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-kube-api-access-mz5sh\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225682 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225694 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/023257e8-ab54-4423-94bc-1f8d547afa69-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225706 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225718 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/023257e8-ab54-4423-94bc-1f8d547afa69-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225729 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9cf78ba6-9116-42cf-8be5-809dd912646c-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225742 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mfw5\" (UniqueName: 
\"kubernetes.io/projected/9ad676ea-95d4-483f-a7d7-574744376b19-kube-api-access-4mfw5\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225755 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/534edd12-5e24-4d56-9f1a-944e1ed4a65b-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225781 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225794 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cggmb\" (UniqueName: \"kubernetes.io/projected/023257e8-ab54-4423-94bc-1f8d547afa69-kube-api-access-cggmb\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225809 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flrdp\" (UniqueName: \"kubernetes.io/projected/9cf78ba6-9116-42cf-8be5-809dd912646c-kube-api-access-flrdp\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225823 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.225836 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.227557 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-config-data" (OuterVolumeSpecName: "config-data") pod "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" (UID: "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.228197 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-logs" (OuterVolumeSpecName: "logs") pod "83c6472a-6c69-45ff-b0c2-3bf66e0523f2" (UID: "83c6472a-6c69-45ff-b0c2-3bf66e0523f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.233520 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a92efa4-12b6-431d-8aa7-9baa545f7e07-logs" (OuterVolumeSpecName: "logs") pod "3a92efa4-12b6-431d-8aa7-9baa545f7e07" (UID: "3a92efa4-12b6-431d-8aa7-9baa545f7e07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.234164 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf69949e-5b98-4cb2-9ce6-f979da44c58e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf69949e-5b98-4cb2-9ce6-f979da44c58e" (UID: "bf69949e-5b98-4cb2-9ce6-f979da44c58e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.234343 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" (UID: "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.238600 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" (UID: "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.245253 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a92efa4-12b6-431d-8aa7-9baa545f7e07-kube-api-access-nphlw" (OuterVolumeSpecName: "kube-api-access-nphlw") pod "3a92efa4-12b6-431d-8aa7-9baa545f7e07" (UID: "3a92efa4-12b6-431d-8aa7-9baa545f7e07"). InnerVolumeSpecName "kube-api-access-nphlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.245549 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83c6472a-6c69-45ff-b0c2-3bf66e0523f2" (UID: "83c6472a-6c69-45ff-b0c2-3bf66e0523f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.246671 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "534edd12-5e24-4d56-9f1a-944e1ed4a65b" (UID: "534edd12-5e24-4d56-9f1a-944e1ed4a65b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.246717 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534edd12-5e24-4d56-9f1a-944e1ed4a65b-kube-api-access-qn2t4" (OuterVolumeSpecName: "kube-api-access-qn2t4") pod "534edd12-5e24-4d56-9f1a-944e1ed4a65b" (UID: "534edd12-5e24-4d56-9f1a-944e1ed4a65b"). InnerVolumeSpecName "kube-api-access-qn2t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.247088 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kube-api-access-tx9nv" (OuterVolumeSpecName: "kube-api-access-tx9nv") pod "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" (UID: "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50"). InnerVolumeSpecName "kube-api-access-tx9nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.247384 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daff2282-19f8-48c7-8d1b-780fbe97ec5a-kube-api-access-g9cq2" (OuterVolumeSpecName: "kube-api-access-g9cq2") pod "daff2282-19f8-48c7-8d1b-780fbe97ec5a" (UID: "daff2282-19f8-48c7-8d1b-780fbe97ec5a"). InnerVolumeSpecName "kube-api-access-g9cq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.254632 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.255106 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-kube-api-access-649db" (OuterVolumeSpecName: "kube-api-access-649db") pod "83c6472a-6c69-45ff-b0c2-3bf66e0523f2" (UID: "83c6472a-6c69-45ff-b0c2-3bf66e0523f2"). InnerVolumeSpecName "kube-api-access-649db". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.264837 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.269711 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf69949e-5b98-4cb2-9ce6-f979da44c58e-kube-api-access-qqpjr" (OuterVolumeSpecName: "kube-api-access-qqpjr") pod "bf69949e-5b98-4cb2-9ce6-f979da44c58e" (UID: "bf69949e-5b98-4cb2-9ce6-f979da44c58e"). InnerVolumeSpecName "kube-api-access-qqpjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.322174 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.328659 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.329275 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a92efa4-12b6-431d-8aa7-9baa545f7e07" (UID: "3a92efa4-12b6-431d-8aa7-9baa545f7e07"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.329476 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle\") pod \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\" (UID: \"3a92efa4-12b6-431d-8aa7-9baa545f7e07\") " Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.329542 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data" (OuterVolumeSpecName: "config-data") pod "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" (UID: "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.329651 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data\") pod \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\" (UID: \"5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab\") " Nov 23 07:13:44 crc kubenswrapper[5028]: W1123 07:13:44.329726 5028 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3a92efa4-12b6-431d-8aa7-9baa545f7e07/volumes/kubernetes.io~secret/combined-ca-bundle Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.329741 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a92efa4-12b6-431d-8aa7-9baa545f7e07" (UID: "3a92efa4-12b6-431d-8aa7-9baa545f7e07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: W1123 07:13:44.329853 5028 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab/volumes/kubernetes.io~secret/config-data Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.329866 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data" (OuterVolumeSpecName: "config-data") pod "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" (UID: "5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330109 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330128 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330139 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf69949e-5b98-4cb2-9ce6-f979da44c58e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330148 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a92efa4-12b6-431d-8aa7-9baa545f7e07-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330156 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqpjr\" (UniqueName: \"kubernetes.io/projected/bf69949e-5b98-4cb2-9ce6-f979da44c58e-kube-api-access-qqpjr\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330165 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9cq2\" (UniqueName: \"kubernetes.io/projected/daff2282-19f8-48c7-8d1b-780fbe97ec5a-kube-api-access-g9cq2\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330174 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-649db\" (UniqueName: \"kubernetes.io/projected/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-kube-api-access-649db\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330182 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330204 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-logs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330213 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn2t4\" (UniqueName: \"kubernetes.io/projected/534edd12-5e24-4d56-9f1a-944e1ed4a65b-kube-api-access-qn2t4\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330222 5028 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330231 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nphlw\" (UniqueName: \"kubernetes.io/projected/3a92efa4-12b6-431d-8aa7-9baa545f7e07-kube-api-access-nphlw\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330241 5028 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" 
Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.330234 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.330267 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" containerName="nova-cell1-conductor-conductor" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330249 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx9nv\" (UniqueName: \"kubernetes.io/projected/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-kube-api-access-tx9nv\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330362 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330374 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330384 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330396 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.330512 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.348990 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.365104 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-config-data" (OuterVolumeSpecName: "config-data") pod "3a92efa4-12b6-431d-8aa7-9baa545f7e07" (UID: "3a92efa4-12b6-431d-8aa7-9baa545f7e07"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.377010 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83c6472a-6c69-45ff-b0c2-3bf66e0523f2" (UID: "83c6472a-6c69-45ff-b0c2-3bf66e0523f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.379218 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "534edd12-5e24-4d56-9f1a-944e1ed4a65b" (UID: "534edd12-5e24-4d56-9f1a-944e1ed4a65b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.385232 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.406072 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" (UID: "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.411267 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"daff2282-19f8-48c7-8d1b-780fbe97ec5a","Type":"ContainerDied","Data":"ff9e6978564e48b0c5e0fe28e06b5c777dfa825bcf4f62818eecfad25096338e"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.413123 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.413346 5028 scope.go:117] "RemoveContainer" containerID="718a8645ae9fcf1d2f103db53e0c739b4e8bd3fbca946d7b283e552d5b71b725" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.419351 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"023257e8-ab54-4423-94bc-1f8d547afa69","Type":"ContainerDied","Data":"2fe4470ed9c83e6c510c7c699fbcb65852fe334f40ba5099abb01026c18389a7"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.419429 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.425975 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.425998 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a92efa4-12b6-431d-8aa7-9baa545f7e07","Type":"ContainerDied","Data":"99a6577748e15bd2bece4c27f21d4ea6a0643f13d49a8c18aad3363986e0a1f8"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.426804 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.430862 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9ad676ea-95d4-483f-a7d7-574744376b19","Type":"ContainerDied","Data":"b563de787451a8636f01c99714129d3eab76086d7039663230781fbfb5166b6b"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.430936 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432128 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432356 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432374 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432385 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432395 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432404 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432415 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.432425 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.434119 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron8471-account-delete-6lm46" event={"ID":"bf69949e-5b98-4cb2-9ce6-f979da44c58e","Type":"ContainerDied","Data":"d27f8531e27348ef0f7b4d39e153a1e2cbc0bf1997388932f97d3b542fa626cc"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.434150 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d27f8531e27348ef0f7b4d39e153a1e2cbc0bf1997388932f97d3b542fa626cc" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.434216 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron8471-account-delete-6lm46" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.441539 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50","Type":"ContainerDied","Data":"86f5110817112f98f47b72f5219df5bb46c35adad37d1b330b154abe6695bc39"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.441623 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.446191 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" (UID: "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.465320 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9cf78ba6-9116-42cf-8be5-809dd912646c","Type":"ContainerDied","Data":"4be48cf3b8444c4789432751f133cb47bd1fc49c9dccce88ae2fa8170c2a95f0"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.465395 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.471760 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-59c8549d57-5f4m7" event={"ID":"83c6472a-6c69-45ff-b0c2-3bf66e0523f2","Type":"ContainerDied","Data":"6281b766e479a340e6b0612d2e058b8598eb659f2c8a3b77efc44256a18ec2a7"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.471859 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-59c8549d57-5f4m7" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.481150 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d87c9f496-cstmz" event={"ID":"fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c","Type":"ContainerDied","Data":"9072656942ca2c8a425d1380b91f019df6ebd75464c8d48f3e189d6478bcbed7"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.481168 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "9ad676ea-95d4-483f-a7d7-574744376b19" (UID: "9ad676ea-95d4-483f-a7d7-574744376b19"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.481295 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d87c9f496-cstmz" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.481791 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data" (OuterVolumeSpecName: "config-data") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.485692 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" event={"ID":"534edd12-5e24-4d56-9f1a-944e1ed4a65b","Type":"ContainerDied","Data":"2950ac751398b4cd6049e8fdfa0744256d107e8eaf1f0e75375a2020f577ee89"} Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.485857 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.485962 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder0e8f-account-delete-xlplw" podUID="4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" containerName="mariadb-account-delete" containerID="cri-o://e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4" gracePeriod=30 Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.486083 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b4c494dd6-rn255" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.486231 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c59f98478-vbp6r" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.486494 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.486552 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.487313 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novacell08b1e-account-delete-qctpn" podUID="75ea620d-8ec5-47db-a758-11a1e3c9d605" containerName="mariadb-account-delete" containerID="cri-o://c0e438b5feadcc60de0125b6ead3243fc9fdba982d8c527a3af16187c90ff94e" gracePeriod=30 Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.487457 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novaapi35e8-account-delete-8zqfb" podUID="c83094db-b0cb-4be4-a13b-de12d76e1fb0" containerName="mariadb-account-delete" containerID="cri-o://72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3" gracePeriod=30 Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.489961 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican5527-account-delete-7gzrg" podUID="a00e6664-ab67-4532-9c12-89c6fa223993" containerName="mariadb-account-delete" containerID="cri-o://9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537" gracePeriod=30 Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.495283 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance8cc3-account-delete-rmsjp" podUID="261d60cc-ee7d-463e-add8-4a4e8af392cd" containerName="mariadb-account-delete" containerID="cri-o://cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5" gracePeriod=30 Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.500882 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" (UID: "b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.506231 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-config-data" (OuterVolumeSpecName: "config-data") pod "9cf78ba6-9116-42cf-8be5-809dd912646c" (UID: "9cf78ba6-9116-42cf-8be5-809dd912646c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.538789 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.538850 5028 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.538859 5028 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.538872 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.538884 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf78ba6-9116-42cf-8be5-809dd912646c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.542971 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" (UID: "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.554255 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3a92efa4-12b6-431d-8aa7-9baa545f7e07" (UID: "3a92efa4-12b6-431d-8aa7-9baa545f7e07"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.570401 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a3fd963-7f73-4069-a993-1dfec6751c57" (UID: "2a3fd963-7f73-4069-a993-1dfec6751c57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.577257 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ad676ea-95d4-483f-a7d7-574744376b19" (UID: "9ad676ea-95d4-483f-a7d7-574744376b19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.593321 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.609628 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-config-data" (OuterVolumeSpecName: "config-data") pod "daff2282-19f8-48c7-8d1b-780fbe97ec5a" (UID: "daff2282-19f8-48c7-8d1b-780fbe97ec5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.611989 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "023257e8-ab54-4423-94bc-1f8d547afa69" (UID: "023257e8-ab54-4423-94bc-1f8d547afa69"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.614197 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" (UID: "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.618008 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daff2282-19f8-48c7-8d1b-780fbe97ec5a" (UID: "daff2282-19f8-48c7-8d1b-780fbe97ec5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.632425 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-config-data" (OuterVolumeSpecName: "config-data") pod "7d35d26e-c3f7-4597-80c6-60358f2d2c21" (UID: "7d35d26e-c3f7-4597-80c6-60358f2d2c21"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640126 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640330 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/023257e8-ab54-4423-94bc-1f8d547afa69-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640394 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d35d26e-c3f7-4597-80c6-60358f2d2c21-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640455 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640519 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640599 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640667 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640728 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640785 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.640919 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daff2282-19f8-48c7-8d1b-780fbe97ec5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.643690 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data" (OuterVolumeSpecName: "config-data") pod "83c6472a-6c69-45ff-b0c2-3bf66e0523f2" (UID: "83c6472a-6c69-45ff-b0c2-3bf66e0523f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.644187 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2a3fd963-7f73-4069-a993-1dfec6751c57" (UID: "2a3fd963-7f73-4069-a993-1dfec6751c57"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.648006 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "9ad676ea-95d4-483f-a7d7-574744376b19" (UID: "9ad676ea-95d4-483f-a7d7-574744376b19"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.649427 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3a92efa4-12b6-431d-8aa7-9baa545f7e07" (UID: "3a92efa4-12b6-431d-8aa7-9baa545f7e07"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.654175 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data" (OuterVolumeSpecName: "config-data") pod "534edd12-5e24-4d56-9f1a-944e1ed4a65b" (UID: "534edd12-5e24-4d56-9f1a-944e1ed4a65b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.658390 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data" (OuterVolumeSpecName: "config-data") pod "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" (UID: "fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.714462 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2a3fd963-7f73-4069-a993-1dfec6751c57" (UID: "2a3fd963-7f73-4069-a993-1dfec6751c57"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.719533 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.727796 5028 scope.go:117] "RemoveContainer" containerID="0eafcd1a07324ac9778cdfa4b78db65ef912e1e1d8dddb571f38dfd760d9566d" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.751647 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.751677 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534edd12-5e24-4d56-9f1a-944e1ed4a65b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.751690 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.751702 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a92efa4-12b6-431d-8aa7-9baa545f7e07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.751713 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83c6472a-6c69-45ff-b0c2-3bf66e0523f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.751724 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a3fd963-7f73-4069-a993-1dfec6751c57-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.751736 5028 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ad676ea-95d4-483f-a7d7-574744376b19-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.751803 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.751853 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data podName:9a20ff76-1a5a-4070-b5ae-c8baf133c9d7 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:52.751836503 +0000 UTC m=+1416.449241282 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data") pod "rabbitmq-cell1-server-0" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7") : configmap "rabbitmq-cell1-config-data" not found Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.752237 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.775227 5028 scope.go:117] "RemoveContainer" containerID="60a864f23434fe7bf4df4b751d820014f9fe0d63d486f0c74863fbcfa326e877" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.775396 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.794741 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.810626 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.823856 5028 scope.go:117] "RemoveContainer" containerID="b98edfeee83e098dad0a822a2d765cd74a15be4763ca4b5c2fbb0a7a9fda7f9f" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.827599 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.860860 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.876369 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.877724 5028 scope.go:117] "RemoveContainer" containerID="30582d733bd8b84f7eb9843365d1cb30de06952daa9b64528d1e925263ed1ee5" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.932046 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.936838 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.942615 5028 scope.go:117] "RemoveContainer" containerID="efc762443a33b269b97ec4b3cde54d3c8c727b78e4871cb2e7039c7badde7203" Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.970905 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.972180 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.973188 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:44 crc 
Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.970905 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.972180 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.973188 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.973222 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c60c122c-3364-4717-bc34-5a610c1a1ac8" containerName="nova-scheduler-scheduler" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.977985 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.980282 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: E1123 07:13:44.980331 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts podName:a00e6664-ab67-4532-9c12-89c6fa223993 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:46.980318739 +0000 UTC m=+1410.677723518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts") pod "barbican5527-account-delete-7gzrg" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993") : configmap "openstack-scripts" not found Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.988899 5028 scope.go:117] "RemoveContainer" containerID="5f2108c80300b01fc1d30c52fcb9398b908685239d7a2e69ca2f22d1daf75c65" Nov 23 07:13:44 crc kubenswrapper[5028]: I1123 07:13:44.997498 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.021104 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.031691 5028 scope.go:117] "RemoveContainer" containerID="ac4ead67e47260cf3f74b78ab8afc2ccc45af0107cfaafca7edd1336fddcee80" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.046219 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.081844 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knrnd\" (UniqueName: \"kubernetes.io/projected/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-kube-api-access-knrnd\") pod \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.081969 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-operator-scripts\") pod \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\" (UID: \"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31\") " Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.082465 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.082510 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts podName:261d60cc-ee7d-463e-add8-4a4e8af392cd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:47.082496926 +0000 UTC m=+1410.779901705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts") pod "glance8cc3-account-delete-rmsjp" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd") : configmap "openstack-scripts" not found
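
The three ExecSync failures and the "Probe errored" entry above are a readiness probe racing pod termination: the kubelet keeps exec'ing /usr/bin/pgrep -r DRST nova-scheduler inside a container that CRI-O is already stopping, so the runtime refuses to register a new exec PID. The probe itself is an ordinary exec readiness probe; a sketch of its shape in Go API types, with the command taken from the log and the timing fields invented for illustration:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Exec readiness probe equivalent to the command in the entries above;
        // pgrep -r DRST matches nova-scheduler processes in run state D, R, S or T.
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{
                    Command: []string{"/usr/bin/pgrep", "-r", "DRST", "nova-scheduler"},
                },
            },
            PeriodSeconds:    10, // illustrative; the log does not state these
            TimeoutSeconds:   5,
            FailureThreshold: 3,
        }
        fmt.Println("readiness exec:", probe.Exec.Command)
    }

Failures of this kind during teardown are noise: an errored readiness probe only marks the pod unready, which it effectively already is.
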
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts") pod "glance8cc3-account-delete-rmsjp" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd") : configmap "openstack-scripts" not found Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.083151 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.083195 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts podName:c83094db-b0cb-4be4-a13b-de12d76e1fb0 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:47.083182263 +0000 UTC m=+1410.780587042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts") pod "novaapi35e8-account-delete-8zqfb" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0") : configmap "openstack-scripts" not found Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.083663 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31" (UID: "fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.083721 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.083759 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts podName:75ea620d-8ec5-47db-a758-11a1e3c9d605 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:47.083746267 +0000 UTC m=+1410.781151056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts") pod "novacell08b1e-account-delete-qctpn" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605") : configmap "openstack-scripts" not found Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.084870 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" path="/var/lib/kubelet/pods/023257e8-ab54-4423-94bc-1f8d547afa69/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.086332 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e8708a8-c087-4267-96c6-2eaa00f1905d" path="/var/lib/kubelet/pods/1e8708a8-c087-4267-96c6-2eaa00f1905d/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.087461 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-kube-api-access-knrnd" (OuterVolumeSpecName: "kube-api-access-knrnd") pod "fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31" (UID: "fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31"). InnerVolumeSpecName "kube-api-access-knrnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.087530 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" path="/var/lib/kubelet/pods/230d8024-5d83-4742-9bf9-77bc956dd4a9/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.095230 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" path="/var/lib/kubelet/pods/3a92efa4-12b6-431d-8aa7-9baa545f7e07/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.096231 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ad8dce-7019-4338-9916-70bee8bdcf00" path="/var/lib/kubelet/pods/51ad8dce-7019-4338-9916-70bee8bdcf00/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.099196 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" path="/var/lib/kubelet/pods/5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.099920 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad676ea-95d4-483f-a7d7-574744376b19" path="/var/lib/kubelet/pods/9ad676ea-95d4-483f-a7d7-574744376b19/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.101670 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" path="/var/lib/kubelet/pods/b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.102387 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0" path="/var/lib/kubelet/pods/c3934fdc-6c48-4cd9-89e3-1fbbc62b4ca0/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.102848 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daff2282-19f8-48c7-8d1b-780fbe97ec5a" path="/var/lib/kubelet/pods/daff2282-19f8-48c7-8d1b-780fbe97ec5a/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.104416 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed0b2201-09f6-4478-b00a-285dcd96ae12" path="/var/lib/kubelet/pods/ed0b2201-09f6-4478-b00a-285dcd96ae12/volumes" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.105817 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.105841 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.105854 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7d87c9f496-cstmz"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.105883 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7d87c9f496-cstmz"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.108202 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5c59f98478-vbp6r"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.117874 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5c59f98478-vbp6r"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.118671 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-59c8549d57-5f4m7"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.126811 
5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-59c8549d57-5f4m7"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.129799 5028 scope.go:117] "RemoveContainer" containerID="8b01950aabfee244fdd553d635003949a329f35cf0bf54c41d11700015415cf0" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.131927 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6b4c494dd6-rn255"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.145888 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6b4c494dd6-rn255"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.155869 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.164924 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.183119 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knrnd\" (UniqueName: \"kubernetes.io/projected/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-kube-api-access-knrnd\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.183139 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.274900 5028 scope.go:117] "RemoveContainer" containerID="ea5fcb780cb3db6a3d6792d1be448395cc967da3e44687458ed85dea64130699" Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.356049 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.357516 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.360775 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.360832 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 
07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.361504 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.362888 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.365643 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.365705 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.386762 5028 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.386845 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data podName:8399afb1-fbd2-4ce0-b980-46b317d6cfee nodeName:}" failed. No retries permitted until 2025-11-23 07:13:53.386823796 +0000 UTC m=+1417.084228575 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data") pod "rabbitmq-server-0" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee") : configmap "rabbitmq-config-data" not found Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.424327 5028 scope.go:117] "RemoveContainer" containerID="8cdb5b500524c4ab468eb818cb3106416ad89266ed3a6334d118c5a750b1d5a5" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.457990 5028 scope.go:117] "RemoveContainer" containerID="f479f9aaf79c6e583bdbd977ec74f93da1e58b297f19fd2858b002f4f930227c" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.503189 5028 generic.go:334] "Generic (PLEG): container finished" podID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerID="a956eb2eee86d43afc41626c37352de689349467bd476d6a7ecbf7c28a1afb07" exitCode=0 Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.503268 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7","Type":"ContainerDied","Data":"a956eb2eee86d43afc41626c37352de689349467bd476d6a7ecbf7c28a1afb07"} Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.506966 5028 generic.go:334] "Generic (PLEG): container finished" podID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerID="54f05a9f66c1cff1703e9f91ab40df9d1d549e086aad84a05f1c7861710e604f" exitCode=0 Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.507028 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8399afb1-fbd2-4ce0-b980-46b317d6cfee","Type":"ContainerDied","Data":"54f05a9f66c1cff1703e9f91ab40df9d1d549e086aad84a05f1c7861710e604f"} Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.508228 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement31f4-account-delete-9dth6" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.508458 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement31f4-account-delete-9dth6" event={"ID":"fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31","Type":"ContainerDied","Data":"c2002946611372b709f78d1ed62680c81900af1f93edce97a316f81a8f05ec30"} Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.520740 5028 generic.go:334] "Generic (PLEG): container finished" podID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerID="241f9f536ab38394348af160013fb0390313172f6bde249dc5d6d02b4ba10fb4" exitCode=0 Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.520801 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"88c70fc5-621a-45c5-bcf1-716d14e48792","Type":"ContainerDied","Data":"241f9f536ab38394348af160013fb0390313172f6bde249dc5d6d02b4ba10fb4"} Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.520823 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"88c70fc5-621a-45c5-bcf1-716d14e48792","Type":"ContainerDied","Data":"1b54da68b1421bdfab168a510797e851397eb5fe8e0a19a20a7f370e7d47cc08"} Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.520835 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b54da68b1421bdfab168a510797e851397eb5fe8e0a19a20a7f370e7d47cc08" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.524278 5028 generic.go:334] "Generic (PLEG): container finished" podID="f5f67379-72d4-46f4-844f-b00c8f912169" containerID="0825116d750ee80fd532b8320d8c85f3b4fe208475c0c2c3203bb0ac33a3586a" exitCode=0 Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.524331 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f64857b-ngrdb" event={"ID":"f5f67379-72d4-46f4-844f-b00c8f912169","Type":"ContainerDied","Data":"0825116d750ee80fd532b8320d8c85f3b4fe208475c0c2c3203bb0ac33a3586a"} Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.526027 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1/ovn-northd/0.log" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.526509 5028 generic.go:334] "Generic (PLEG): container finished" podID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c" exitCode=139 Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.526574 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1","Type":"ContainerDied","Data":"133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c"} Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.528008 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystoneeb1f-account-delete-fxd29" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.590320 5028 scope.go:117] "RemoveContainer" containerID="c9b884444e010e2f9bac9f3e6dce5c53204fff813817cf07388304fa6d747bab" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.625736 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.626469 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement31f4-account-delete-9dth6"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.644290 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement31f4-account-delete-9dth6"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.653456 5028 scope.go:117] "RemoveContainer" containerID="e31414b733266ebd168e3f95a8474d2ad5c2d2b753b7b52893516e95d7e66b97" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.657390 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystoneeb1f-account-delete-fxd29"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.668828 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystoneeb1f-account-delete-fxd29"] Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.689873 5028 scope.go:117] "RemoveContainer" containerID="d694a3f6d26807a42af11ccff6f7020afb5c194adb048d335e4b7a16a625a72f" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691232 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvgp8\" (UniqueName: \"kubernetes.io/projected/88c70fc5-621a-45c5-bcf1-716d14e48792-kube-api-access-cvgp8\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691296 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-default\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691342 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-galera-tls-certs\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691377 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-operator-scripts\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691427 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-kolla-config\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691459 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-combined-ca-bundle\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691534 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: 
\"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.691560 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-generated\") pod \"88c70fc5-621a-45c5-bcf1-716d14e48792\" (UID: \"88c70fc5-621a-45c5-bcf1-716d14e48792\") " Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.692270 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/841776d0-3a86-46aa-9b13-86a9060620d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.692296 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tw7j\" (UniqueName: \"kubernetes.io/projected/841776d0-3a86-46aa-9b13-86a9060620d7-kube-api-access-5tw7j\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.692587 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.692750 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.692798 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.693056 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.698967 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c70fc5-621a-45c5-bcf1-716d14e48792-kube-api-access-cvgp8" (OuterVolumeSpecName: "kube-api-access-cvgp8") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). InnerVolumeSpecName "kube-api-access-cvgp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.737357 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). 
InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.738123 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.757270 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7xfsr" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" probeResult="failure" output="command timed out" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.760273 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1/ovn-northd/0.log" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.767073 5028 scope.go:117] "RemoveContainer" containerID="63b8e89fb1de2fc338f25d77ae5a049681cdad5b98161e7cec9b04e9d722fa94" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.771478 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.774549 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "88c70fc5-621a-45c5-bcf1-716d14e48792" (UID: "88c70fc5-621a-45c5-bcf1-716d14e48792"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.775238 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.796923 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-erlang-cookie-secret\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.796983 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797002 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-plugins-conf\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797029 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t49w2\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-kube-api-access-t49w2\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797061 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-rundir\") pod \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797079 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-combined-ca-bundle\") pod \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797116 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797133 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-plugins\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797151 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-confd\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797211 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-northd-tls-certs\") pod \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797255 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-pod-info\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797273 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-server-conf\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797294 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-metrics-certs-tls-certs\") pod \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797318 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-scripts\") pod \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797346 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pmq5\" (UniqueName: \"kubernetes.io/projected/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-kube-api-access-2pmq5\") pod \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797363 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-erlang-cookie\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797381 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-config\") pod \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\" (UID: \"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797405 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-tls\") pod \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\" (UID: \"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797804 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-default\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797824 5028 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797835 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797846 5028 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88c70fc5-621a-45c5-bcf1-716d14e48792-kolla-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797857 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c70fc5-621a-45c5-bcf1-716d14e48792-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797880 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797891 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/88c70fc5-621a-45c5-bcf1-716d14e48792-config-data-generated\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.797907 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvgp8\" (UniqueName: \"kubernetes.io/projected/88c70fc5-621a-45c5-bcf1-716d14e48792-kube-api-access-cvgp8\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.799638 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" (UID: "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.801163 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.801599 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.802442 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-config" (OuterVolumeSpecName: "config") pod "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" (UID: "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.802663 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.803042 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-scripts" (OuterVolumeSpecName: "scripts") pod "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" (UID: "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.803428 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7xfsr" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" probeResult="failure" output=<
Nov 23 07:13:45 crc kubenswrapper[5028]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0
Nov 23 07:13:45 crc kubenswrapper[5028]: >
Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.803930 5028 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found
Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.804007 5028 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found
Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.804018 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.804028 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found]
Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.804042 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.804096 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:53.804077646 +0000 UTC m=+1417.501482425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found]
Nov 23 07:13:45 crc kubenswrapper[5028]: E1123 07:13:45.804136 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts podName:4c18bacc-44ba-49cc-9c2b-9d06834a8cdd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:49.804107336 +0000 UTC m=+1413.501512115 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts") pod "cinder0e8f-account-delete-xlplw" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd") : configmap "openstack-scripts" not found
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.812658 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.812791 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-kube-api-access-2pmq5" (OuterVolumeSpecName: "kube-api-access-2pmq5") pod "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" (UID: "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1"). InnerVolumeSpecName "kube-api-access-2pmq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.812831 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.813412 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.819302 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-kube-api-access-t49w2" (OuterVolumeSpecName: "kube-api-access-t49w2") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "kube-api-access-t49w2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.827518 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.831357 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-pod-info" (OuterVolumeSpecName: "pod-info") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.843856 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.852791 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data" (OuterVolumeSpecName: "config-data") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.868590 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" (UID: "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899366 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-server-conf\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899419 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899481 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-erlang-cookie\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899516 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899545 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-confd\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899579 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8399afb1-fbd2-4ce0-b980-46b317d6cfee-pod-info\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899629 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-tls\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899646 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-plugins-conf\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899677 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8399afb1-fbd2-4ce0-b980-46b317d6cfee-erlang-cookie-secret\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899703 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h7ph\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-kube-api-access-9h7ph\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.899731 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-plugins\") pod \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\" (UID: \"8399afb1-fbd2-4ce0-b980-46b317d6cfee\") "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900136 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900149 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-rundir\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900159 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900168 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900186 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900194 5028 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-pod-info\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900202 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900210 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pmq5\" (UniqueName: \"kubernetes.io/projected/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-kube-api-access-2pmq5\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900220 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900228 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-config\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900236 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900243 5028 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900277 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900286 5028 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.900294 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t49w2\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-kube-api-access-t49w2\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.905460 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.906594 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.906656 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.908776 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.915758 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8399afb1-fbd2-4ce0-b980-46b317d6cfee-pod-info" (OuterVolumeSpecName: "pod-info") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.918510 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.918710 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-kube-api-access-9h7ph" (OuterVolumeSpecName: "kube-api-access-9h7ph") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "kube-api-access-9h7ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.920489 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-server-conf" (OuterVolumeSpecName: "server-conf") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.921161 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8399afb1-fbd2-4ce0-b980-46b317d6cfee-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.931991 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.944828 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data" (OuterVolumeSpecName: "config-data") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.950218 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" (UID: "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.952661 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79f64857b-ngrdb"
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.979591 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-server-conf" (OuterVolumeSpecName: "server-conf") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.983209 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" (UID: "7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:45 crc kubenswrapper[5028]: I1123 07:13:45.988248 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" (UID: "9a20ff76-1a5a-4070-b5ae-c8baf133c9d7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002191 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2bxf\" (UniqueName: \"kubernetes.io/projected/f5f67379-72d4-46f4-844f-b00c8f912169-kube-api-access-l2bxf\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002254 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-credential-keys\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002296 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-internal-tls-certs\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002342 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-fernet-keys\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002431 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-combined-ca-bundle\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002458 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-scripts\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002474 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-config-data\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002509 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-public-tls-certs\") pod \"f5f67379-72d4-46f4-844f-b00c8f912169\" (UID: \"f5f67379-72d4-46f4-844f-b00c8f912169\") "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002841 5028 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-server-conf\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002868 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002878 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002888 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002896 5028 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8399afb1-fbd2-4ce0-b980-46b317d6cfee-pod-info\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002906 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002914 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002922 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002930 5028 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8399afb1-fbd2-4ce0-b980-46b317d6cfee-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.002938 5028 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8399afb1-fbd2-4ce0-b980-46b317d6cfee-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.003003 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h7ph\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-kube-api-access-9h7ph\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.003013 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.003024 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.003032 5028 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7-server-conf\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.003040 5028 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.007038 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-scripts" (OuterVolumeSpecName: "scripts") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.013277 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.013317 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f67379-72d4-46f4-844f-b00c8f912169-kube-api-access-l2bxf" (OuterVolumeSpecName: "kube-api-access-l2bxf") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "kube-api-access-l2bxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.014443 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.028143 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.039474 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8399afb1-fbd2-4ce0-b980-46b317d6cfee" (UID: "8399afb1-fbd2-4ce0-b980-46b317d6cfee"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.041413 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.041560 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-config-data" (OuterVolumeSpecName: "config-data") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.064502 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.071134 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f5f67379-72d4-46f4-844f-b00c8f912169" (UID: "f5f67379-72d4-46f4-844f-b00c8f912169"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.104858 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2bxf\" (UniqueName: \"kubernetes.io/projected/f5f67379-72d4-46f4-844f-b00c8f912169-kube-api-access-l2bxf\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105126 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8399afb1-fbd2-4ce0-b980-46b317d6cfee-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105139 5028 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105148 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105156 5028 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105164 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105171 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105179 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105187 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5f67379-72d4-46f4-844f-b00c8f912169-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:46 crc kubenswrapper[5028]: I1123 07:13:46.105196 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.553154 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-77b69c59d9-28nfd" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.553383 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-77b69c59d9-28nfd" podUID="230d8024-5d83-4742-9bf9-77bc956dd4a9" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.167:8080/healthcheck\": dial tcp 10.217.0.167:8080: i/o timeout"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.553849 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9a20ff76-1a5a-4070-b5ae-c8baf133c9d7","Type":"ContainerDied","Data":"08e66610e24174c7e42cf6ba43bd7183bda3dd3c5bb57c8d1313ffe879923d19"}
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.553882 5028 scope.go:117] "RemoveContainer" containerID="a956eb2eee86d43afc41626c37352de689349467bd476d6a7ecbf7c28a1afb07"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.553994 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:46.557779 5028 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Nov 23 07:13:47 crc kubenswrapper[5028]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-23T07:13:39Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Nov 23 07:13:47 crc kubenswrapper[5028]: /etc/init.d/functions: line 589: 400 Alarm clock "$@"
Nov 23 07:13:47 crc kubenswrapper[5028]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-7xfsr" message=<
Nov 23 07:13:47 crc kubenswrapper[5028]: Exiting ovn-controller (1) [FAILED]
Nov 23 07:13:47 crc kubenswrapper[5028]: Killing ovn-controller (1) [ OK ]
Nov 23 07:13:47 crc kubenswrapper[5028]: Killing ovn-controller (1) with SIGKILL [ OK ]
Nov 23 07:13:47 crc kubenswrapper[5028]: 2025-11-23T07:13:39Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Nov 23 07:13:47 crc kubenswrapper[5028]: /etc/init.d/functions: line 589: 400 Alarm clock "$@"
Nov 23 07:13:47 crc kubenswrapper[5028]: >
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:46.557822 5028 kuberuntime_container.go:691] "PreStop hook failed" err=<
Nov 23 07:13:47 crc kubenswrapper[5028]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2025-11-23T07:13:39Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Nov 23 07:13:47 crc kubenswrapper[5028]: /etc/init.d/functions: line 589: 400 Alarm clock "$@"
Nov 23 07:13:47 crc kubenswrapper[5028]: > pod="openstack/ovn-controller-7xfsr" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" containerID="cri-o://964d01f35d8b75183e2fe8b77259a378a53f4e6342c8d59bbcaf1c413ad35077"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.557859 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-7xfsr" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" containerID="cri-o://964d01f35d8b75183e2fe8b77259a378a53f4e6342c8d59bbcaf1c413ad35077" gracePeriod=22
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.567341 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f64857b-ngrdb" event={"ID":"f5f67379-72d4-46f4-844f-b00c8f912169","Type":"ContainerDied","Data":"d58f844ef33434dcc4d87cdb6f841c511856443e9b8eb6494a0141400fb4de32"}
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.567410 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79f64857b-ngrdb"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.587320 5028 generic.go:334] "Generic (PLEG): container finished" podID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerID="18362900b5e9f33833027654ded3247f26684450ed496934014046dcf552169b" exitCode=0
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.587380 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerDied","Data":"18362900b5e9f33833027654ded3247f26684450ed496934014046dcf552169b"}
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.597045 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1/ovn-northd/0.log"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.598310 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1","Type":"ContainerDied","Data":"529e550bfb69799e339e1814672f0eaf2c52bca462b4e1d5c98b2da2af515ceb"}
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.598385 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.622122 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.622504 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.623839 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8399afb1-fbd2-4ce0-b980-46b317d6cfee","Type":"ContainerDied","Data":"be7964c75ec89318e27a0a2810585c7f6186df68011439570c0df8cecfaddff7"}
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.914189 5028 scope.go:117] "RemoveContainer" containerID="5a0f902d9eb5361838184035de7e15282cacda3c6de606d046603750a68274e8"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.926372 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:46.934515 5028 scope.go:117] "RemoveContainer" containerID="0825116d750ee80fd532b8320d8c85f3b4fe208475c0c2c3203bb0ac33a3586a"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.003519 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.035275 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.037705 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvzjn\" (UniqueName: \"kubernetes.io/projected/9968b58c-c1a8-4491-b918-2c1cd8f56695-kube-api-access-dvzjn\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.037785 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-run-httpd\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.037848 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-ceilometer-tls-certs\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.037924 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-combined-ca-bundle\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.037978 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-config-data\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.038003 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-scripts\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.038027 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-sg-core-conf-yaml\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.038065 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-log-httpd\") pod \"9968b58c-c1a8-4491-b918-2c1cd8f56695\" (UID: \"9968b58c-c1a8-4491-b918-2c1cd8f56695\") "
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.038713 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.038765 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts podName:a00e6664-ab67-4532-9c12-89c6fa223993 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:51.038745217 +0000 UTC m=+1414.736149996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts") pod "barbican5527-account-delete-7gzrg" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993") : configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.040677 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.042004 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.047167 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9968b58c-c1a8-4491-b918-2c1cd8f56695-kube-api-access-dvzjn" (OuterVolumeSpecName: "kube-api-access-dvzjn") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "kube-api-access-dvzjn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.049361 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-scripts" (OuterVolumeSpecName: "scripts") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.051547 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.056343 5028 scope.go:117] "RemoveContainer" containerID="133820101a3796a794f34d61a8feaf21dcc90f5a0523a8703ad78ae7122e121c"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.072701 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" path="/var/lib/kubelet/pods/2a3fd963-7f73-4069-a993-1dfec6751c57/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.073401 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" path="/var/lib/kubelet/pods/534edd12-5e24-4d56-9f1a-944e1ed4a65b/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.075110 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" path="/var/lib/kubelet/pods/7d35d26e-c3f7-4597-80c6-60358f2d2c21/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.076118 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" path="/var/lib/kubelet/pods/83c6472a-6c69-45ff-b0c2-3bf66e0523f2/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.077340 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="841776d0-3a86-46aa-9b13-86a9060620d7" path="/var/lib/kubelet/pods/841776d0-3a86-46aa-9b13-86a9060620d7/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.077982 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" path="/var/lib/kubelet/pods/9a20ff76-1a5a-4070-b5ae-c8baf133c9d7/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.079091 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" path="/var/lib/kubelet/pods/9cf78ba6-9116-42cf-8be5-809dd912646c/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.080896 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31" path="/var/lib/kubelet/pods/fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.084503 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.087310 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" path="/var/lib/kubelet/pods/fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c/volumes"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.087660 5028 scope.go:117] "RemoveContainer" containerID="1ff78194c0b6fef4a35d4ce365c5ce7a085c94cc689d51e542e7782f680a7d0a"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.091241 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.091281 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.091299 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.093012 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8nbsx"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.098850 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8nbsx"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.103994 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.109143 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.112832 5028 scope.go:117] "RemoveContainer" containerID="54f05a9f66c1cff1703e9f91ab40df9d1d549e086aad84a05f1c7861710e604f"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.113536 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8471-account-create-4q7s2"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.118167 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.119336 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron8471-account-delete-6lm46"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.124915 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8471-account-create-4q7s2"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.133158 5028 scope.go:117] "RemoveContainer" containerID="8d15922df2cca35d78979c23ab251a4e5ad6c02f4fa23139d560e3bd174d432a"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.133903 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron8471-account-delete-6lm46"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.136117 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.140455 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.140527 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts podName:c83094db-b0cb-4be4-a13b-de12d76e1fb0 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:51.140507215 +0000 UTC m=+1414.837911994 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts") pod "novaapi35e8-account-delete-8zqfb" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0") : configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.140530 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.140761 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts podName:261d60cc-ee7d-463e-add8-4a4e8af392cd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:51.140747281 +0000 UTC m=+1414.838152060 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts") pod "glance8cc3-account-delete-rmsjp" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd") : configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.140847 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-79f64857b-ngrdb"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.140865 5028 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.140877 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.140887 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.140896 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.140905 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.140926 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.140935 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvzjn\" (UniqueName: \"kubernetes.io/projected/9968b58c-c1a8-4491-b918-2c1cd8f56695-kube-api-access-dvzjn\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: E1123 07:13:47.140969 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts podName:75ea620d-8ec5-47db-a758-11a1e3c9d605 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:51.140961916 +0000 UTC m=+1414.838366695 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts") pod "novacell08b1e-account-delete-qctpn" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605") : configmap "openstack-scripts" not found
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.141019 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9968b58c-c1a8-4491-b918-2c1cd8f56695-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.147920 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-79f64857b-ngrdb"]
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.152084 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-config-data" (OuterVolumeSpecName: "config-data") pod "9968b58c-c1a8-4491-b918-2c1cd8f56695" (UID: "9968b58c-c1a8-4491-b918-2c1cd8f56695"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.242241 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9968b58c-c1a8-4491-b918-2c1cd8f56695-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.633191 5028 generic.go:334] "Generic (PLEG): container finished" podID="c60c122c-3364-4717-bc34-5a610c1a1ac8" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" exitCode=0
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.633260 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c60c122c-3364-4717-bc34-5a610c1a1ac8","Type":"ContainerDied","Data":"d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79"}
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.636707 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9968b58c-c1a8-4491-b918-2c1cd8f56695","Type":"ContainerDied","Data":"906ebed0ef1eb9b7876f5cb2260949844121f6efc53c048a29009fc39e4bb7db"}
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.636740 5028 scope.go:117] "RemoveContainer" containerID="66ffc82e2a92f52c482cb6d579b807e3af34155790eb820d2d16eecf464cd86d"
Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.637009 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.644181 5028 generic.go:334] "Generic (PLEG): container finished" podID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" exitCode=0 Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.644250 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ea19991-fd9e-4f02-a48f-e6bc67848e43","Type":"ContainerDied","Data":"a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116"} Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.646062 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7xfsr_8856612c-6e19-4bcc-86ab-f5fd8f75896b/ovn-controller/0.log" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.646104 5028 generic.go:334] "Generic (PLEG): container finished" podID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerID="964d01f35d8b75183e2fe8b77259a378a53f4e6342c8d59bbcaf1c413ad35077" exitCode=137 Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.646161 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xfsr" event={"ID":"8856612c-6e19-4bcc-86ab-f5fd8f75896b","Type":"ContainerDied","Data":"964d01f35d8b75183e2fe8b77259a378a53f4e6342c8d59bbcaf1c413ad35077"} Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.676573 5028 scope.go:117] "RemoveContainer" containerID="fb676e1b2e2c02ba89cd38ce4f10b302e56efd80c670bac63573599363bc4fb7" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.685623 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.694863 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.710237 5028 scope.go:117] "RemoveContainer" containerID="18362900b5e9f33833027654ded3247f26684450ed496934014046dcf552169b" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.782358 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.786671 5028 scope.go:117] "RemoveContainer" containerID="dde5c69666e70bb8f75a7ed7a789c3ae7acdc1f1acf508ccc160a39ef016782c" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.790371 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7xfsr_8856612c-6e19-4bcc-86ab-f5fd8f75896b/ovn-controller/0.log" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.790438 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xfsr" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.807159 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.942756 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": context deadline exceeded" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.942793 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.957850 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hng4t\" (UniqueName: \"kubernetes.io/projected/c60c122c-3364-4717-bc34-5a610c1a1ac8-kube-api-access-hng4t\") pod \"c60c122c-3364-4717-bc34-5a610c1a1ac8\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.957900 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-combined-ca-bundle\") pod \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.957928 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-782dx\" (UniqueName: \"kubernetes.io/projected/7ea19991-fd9e-4f02-a48f-e6bc67848e43-kube-api-access-782dx\") pod \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.957979 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run-ovn\") pod \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958021 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-config-data\") pod \"c60c122c-3364-4717-bc34-5a610c1a1ac8\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958105 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-ovn-controller-tls-certs\") pod \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958118 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8856612c-6e19-4bcc-86ab-f5fd8f75896b" (UID: "8856612c-6e19-4bcc-86ab-f5fd8f75896b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958155 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-combined-ca-bundle\") pod \"c60c122c-3364-4717-bc34-5a610c1a1ac8\" (UID: \"c60c122c-3364-4717-bc34-5a610c1a1ac8\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958209 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-combined-ca-bundle\") pod \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958242 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbpp\" (UniqueName: \"kubernetes.io/projected/8856612c-6e19-4bcc-86ab-f5fd8f75896b-kube-api-access-ntbpp\") pod \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958310 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run" (OuterVolumeSpecName: "var-run") pod "8856612c-6e19-4bcc-86ab-f5fd8f75896b" (UID: "8856612c-6e19-4bcc-86ab-f5fd8f75896b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958567 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run\") pod \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958695 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-log-ovn\") pod \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958764 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-config-data\") pod \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\" (UID: \"7ea19991-fd9e-4f02-a48f-e6bc67848e43\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.958813 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8856612c-6e19-4bcc-86ab-f5fd8f75896b-scripts\") pod \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\" (UID: \"8856612c-6e19-4bcc-86ab-f5fd8f75896b\") " Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.959067 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8856612c-6e19-4bcc-86ab-f5fd8f75896b" (UID: "8856612c-6e19-4bcc-86ab-f5fd8f75896b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.959708 5028 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.959733 5028 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.959746 5028 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8856612c-6e19-4bcc-86ab-f5fd8f75896b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.960236 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8856612c-6e19-4bcc-86ab-f5fd8f75896b-scripts" (OuterVolumeSpecName: "scripts") pod "8856612c-6e19-4bcc-86ab-f5fd8f75896b" (UID: "8856612c-6e19-4bcc-86ab-f5fd8f75896b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.962653 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60c122c-3364-4717-bc34-5a610c1a1ac8-kube-api-access-hng4t" (OuterVolumeSpecName: "kube-api-access-hng4t") pod "c60c122c-3364-4717-bc34-5a610c1a1ac8" (UID: "c60c122c-3364-4717-bc34-5a610c1a1ac8"). InnerVolumeSpecName "kube-api-access-hng4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.963140 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea19991-fd9e-4f02-a48f-e6bc67848e43-kube-api-access-782dx" (OuterVolumeSpecName: "kube-api-access-782dx") pod "7ea19991-fd9e-4f02-a48f-e6bc67848e43" (UID: "7ea19991-fd9e-4f02-a48f-e6bc67848e43"). InnerVolumeSpecName "kube-api-access-782dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.963350 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8856612c-6e19-4bcc-86ab-f5fd8f75896b-kube-api-access-ntbpp" (OuterVolumeSpecName: "kube-api-access-ntbpp") pod "8856612c-6e19-4bcc-86ab-f5fd8f75896b" (UID: "8856612c-6e19-4bcc-86ab-f5fd8f75896b"). InnerVolumeSpecName "kube-api-access-ntbpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.988396 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c60c122c-3364-4717-bc34-5a610c1a1ac8" (UID: "c60c122c-3364-4717-bc34-5a610c1a1ac8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.988678 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ea19991-fd9e-4f02-a48f-e6bc67848e43" (UID: "7ea19991-fd9e-4f02-a48f-e6bc67848e43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.990376 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-config-data" (OuterVolumeSpecName: "config-data") pod "7ea19991-fd9e-4f02-a48f-e6bc67848e43" (UID: "7ea19991-fd9e-4f02-a48f-e6bc67848e43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:47 crc kubenswrapper[5028]: I1123 07:13:47.993483 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8856612c-6e19-4bcc-86ab-f5fd8f75896b" (UID: "8856612c-6e19-4bcc-86ab-f5fd8f75896b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.001828 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-config-data" (OuterVolumeSpecName: "config-data") pod "c60c122c-3364-4717-bc34-5a610c1a1ac8" (UID: "c60c122c-3364-4717-bc34-5a610c1a1ac8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.023474 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "8856612c-6e19-4bcc-86ab-f5fd8f75896b" (UID: "8856612c-6e19-4bcc-86ab-f5fd8f75896b"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060618 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060650 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8856612c-6e19-4bcc-86ab-f5fd8f75896b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060660 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hng4t\" (UniqueName: \"kubernetes.io/projected/c60c122c-3364-4717-bc34-5a610c1a1ac8-kube-api-access-hng4t\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060670 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060680 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-782dx\" (UniqueName: \"kubernetes.io/projected/7ea19991-fd9e-4f02-a48f-e6bc67848e43-kube-api-access-782dx\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060690 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060701 5028 reconciler_common.go:293] "Volume detached for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8856612c-6e19-4bcc-86ab-f5fd8f75896b-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060711 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c60c122c-3364-4717-bc34-5a610c1a1ac8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060723 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ea19991-fd9e-4f02-a48f-e6bc67848e43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.060735 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbpp\" (UniqueName: \"kubernetes.io/projected/8856612c-6e19-4bcc-86ab-f5fd8f75896b-kube-api-access-ntbpp\") on node \"crc\" DevicePath \"\"" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.659619 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.660526 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c60c122c-3364-4717-bc34-5a610c1a1ac8","Type":"ContainerDied","Data":"17c5ce5eca3e692849207165f20c490b180f8c5e9551ff30cb56f7c090630fd1"} Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.660563 5028 scope.go:117] "RemoveContainer" containerID="d6c6c2b0973fb09e2931ebaf6b2ebfa36cb6d6827a4b50ec42d1feca28d8fc79" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.663048 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7ea19991-fd9e-4f02-a48f-e6bc67848e43","Type":"ContainerDied","Data":"c137446641ac0423cf8ec6d13ee8140245a2c709921537283a41f1791c29c3f7"} Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.663111 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.671714 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7xfsr_8856612c-6e19-4bcc-86ab-f5fd8f75896b/ovn-controller/0.log" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.671751 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xfsr" event={"ID":"8856612c-6e19-4bcc-86ab-f5fd8f75896b","Type":"ContainerDied","Data":"272f84c189e1b28c413f2511fdddf4d927253b1101562b5e7a320d8b4c18d772"} Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.671797 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7xfsr" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.688463 5028 scope.go:117] "RemoveContainer" containerID="a416e2da80908614f6f85af4e7f0d43ee6c26c2c5ace8bd04e5a82c8eb5d9116" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.695498 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.704329 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.715217 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.722103 5028 scope.go:117] "RemoveContainer" containerID="964d01f35d8b75183e2fe8b77259a378a53f4e6342c8d59bbcaf1c413ad35077" Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.722232 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.727366 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7xfsr"] Nov 23 07:13:48 crc kubenswrapper[5028]: I1123 07:13:48.733129 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7xfsr"] Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.063776 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60cf19a2-c7e4-40db-a8f9-6d562989323a" path="/var/lib/kubelet/pods/60cf19a2-c7e4-40db-a8f9-6d562989323a/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.064461 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" path="/var/lib/kubelet/pods/7ea19991-fd9e-4f02-a48f-e6bc67848e43/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.065174 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" path="/var/lib/kubelet/pods/7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.066770 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" path="/var/lib/kubelet/pods/8399afb1-fbd2-4ce0-b980-46b317d6cfee/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.067876 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" path="/var/lib/kubelet/pods/8856612c-6e19-4bcc-86ab-f5fd8f75896b/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.068676 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" path="/var/lib/kubelet/pods/88c70fc5-621a-45c5-bcf1-716d14e48792/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.070497 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" path="/var/lib/kubelet/pods/9968b58c-c1a8-4491-b918-2c1cd8f56695/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.071602 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0d6135-0757-4a02-9c31-ccde549d04e6" path="/var/lib/kubelet/pods/be0d6135-0757-4a02-9c31-ccde549d04e6/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.073274 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf69949e-5b98-4cb2-9ce6-f979da44c58e" 
path="/var/lib/kubelet/pods/bf69949e-5b98-4cb2-9ce6-f979da44c58e/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.074607 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60c122c-3364-4717-bc34-5a610c1a1ac8" path="/var/lib/kubelet/pods/c60c122c-3364-4717-bc34-5a610c1a1ac8/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: I1123 07:13:49.075886 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f67379-72d4-46f4-844f-b00c8f912169" path="/var/lib/kubelet/pods/f5f67379-72d4-46f4-844f-b00c8f912169/volumes" Nov 23 07:13:49 crc kubenswrapper[5028]: E1123 07:13:49.887819 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:49 crc kubenswrapper[5028]: E1123 07:13:49.887903 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts podName:4c18bacc-44ba-49cc-9c2b-9d06834a8cdd nodeName:}" failed. No retries permitted until 2025-11-23 07:13:57.887881744 +0000 UTC m=+1421.585286533 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts") pod "cinder0e8f-account-delete-xlplw" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd") : configmap "openstack-scripts" not found Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.354996 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.355328 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.355578 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.355623 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.356494 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.360912 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.363678 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:50 crc kubenswrapper[5028]: E1123 07:13:50.363735 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.115056 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.116511 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts podName:a00e6664-ab67-4532-9c12-89c6fa223993 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:59.115731369 +0000 UTC m=+1422.813136208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts") pod "barbican5527-account-delete-7gzrg" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993") : configmap "openstack-scripts" not found Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.218540 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.218699 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts podName:75ea620d-8ec5-47db-a758-11a1e3c9d605 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:59.218665625 +0000 UTC m=+1422.916070414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts") pod "novacell08b1e-account-delete-qctpn" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605") : configmap "openstack-scripts" not found Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.218787 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.218840 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts podName:261d60cc-ee7d-463e-add8-4a4e8af392cd nodeName:}" failed. 
No retries permitted until 2025-11-23 07:13:59.218824439 +0000 UTC m=+1422.916229218 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts") pod "glance8cc3-account-delete-rmsjp" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd") : configmap "openstack-scripts" not found Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.218872 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:51 crc kubenswrapper[5028]: E1123 07:13:51.218893 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts podName:c83094db-b0cb-4be4-a13b-de12d76e1fb0 nodeName:}" failed. No retries permitted until 2025-11-23 07:13:59.218887131 +0000 UTC m=+1422.916291910 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts") pod "novaapi35e8-account-delete-8zqfb" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0") : configmap "openstack-scripts" not found Nov 23 07:13:53 crc kubenswrapper[5028]: E1123 07:13:53.854094 5028 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Nov 23 07:13:53 crc kubenswrapper[5028]: E1123 07:13:53.854396 5028 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Nov 23 07:13:53 crc kubenswrapper[5028]: E1123 07:13:53.854405 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:13:53 crc kubenswrapper[5028]: E1123 07:13:53.854417 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:53 crc kubenswrapper[5028]: E1123 07:13:53.854470 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:14:09.854452267 +0000 UTC m=+1433.551857046 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.355466 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.355815 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.356218 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.356251 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.357244 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.358778 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.360009 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.360045 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.751075 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hvmvb"] Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752047 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752069 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-api" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752082 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752089 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-api" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752102 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="openstack-network-exporter" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752109 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="openstack-network-exporter" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752120 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752126 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752135 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerName="setup-container" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752141 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerName="setup-container" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752147 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752153 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752159 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752164 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752173 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="sg-core" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752179 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" 
containerName="sg-core" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752187 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerName="rabbitmq" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752193 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerName="rabbitmq" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752203 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerName="galera" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752209 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerName="galera" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752216 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752222 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752235 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="proxy-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752241 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="proxy-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752251 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752257 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752266 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752271 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752282 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752287 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752297 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752303 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752311 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" containerName="memcached" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752316 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" 
containerName="memcached" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752326 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daff2282-19f8-48c7-8d1b-780fbe97ec5a" containerName="nova-cell0-conductor-conductor" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752332 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="daff2282-19f8-48c7-8d1b-780fbe97ec5a" containerName="nova-cell0-conductor-conductor" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752341 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-metadata" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752348 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-metadata" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752355 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerName="setup-container" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752361 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerName="setup-container" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752371 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752376 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752383 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="ovn-northd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752389 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="ovn-northd" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752398 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31" containerName="mariadb-account-delete" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752404 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31" containerName="mariadb-account-delete" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752421 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752426 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752436 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752441 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752449 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerName="mysql-bootstrap" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752455 5028 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerName="mysql-bootstrap" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752464 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-central-agent" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752469 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-central-agent" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752479 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5f67379-72d4-46f4-844f-b00c8f912169" containerName="keystone-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752484 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5f67379-72d4-46f4-844f-b00c8f912169" containerName="keystone-api" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752492 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerName="rabbitmq" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752497 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerName="rabbitmq" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752507 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752512 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752522 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad676ea-95d4-483f-a7d7-574744376b19" containerName="kube-state-metrics" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752527 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad676ea-95d4-483f-a7d7-574744376b19" containerName="kube-state-metrics" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752536 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752541 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752552 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60c122c-3364-4717-bc34-5a610c1a1ac8" containerName="nova-scheduler-scheduler" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752558 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60c122c-3364-4717-bc34-5a610c1a1ac8" containerName="nova-scheduler-scheduler" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752568 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752573 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752584 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf69949e-5b98-4cb2-9ce6-f979da44c58e" containerName="mariadb-account-delete" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752589 5028 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bf69949e-5b98-4cb2-9ce6-f979da44c58e" containerName="mariadb-account-delete" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752598 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-notification-agent" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752604 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-notification-agent" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752615 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752620 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker-log" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752630 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752635 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener" Nov 23 07:13:55 crc kubenswrapper[5028]: E1123 07:13:55.752642 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" containerName="nova-cell1-conductor-conductor" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752648 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" containerName="nova-cell1-conductor-conductor" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752796 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf2db0d-a7c2-4e77-b93c-6a2c1e5dec31" containerName="mariadb-account-delete" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752811 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c70fc5-621a-45c5-bcf1-716d14e48792" containerName="galera" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752821 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="daff2282-19f8-48c7-8d1b-780fbe97ec5a" containerName="nova-cell0-conductor-conductor" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752830 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5f67379-72d4-46f4-844f-b00c8f912169" containerName="keystone-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752838 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-central-agent" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752844 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752851 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a20ff76-1a5a-4070-b5ae-c8baf133c9d7" containerName="rabbitmq" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752859 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="ceilometer-notification-agent" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752870 5028 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752880 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="023257e8-ab54-4423-94bc-1f8d547afa69" containerName="cinder-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752889 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752899 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752907 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8856612c-6e19-4bcc-86ab-f5fd8f75896b" containerName="ovn-controller" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752917 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" containerName="nova-metadata-metadata" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752927 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea19991-fd9e-4f02-a48f-e6bc67848e43" containerName="nova-cell1-conductor-conductor" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752938 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="openstack-network-exporter" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752961 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752969 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="sg-core" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752976 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9968b58c-c1a8-4491-b918-2c1cd8f56695" containerName="proxy-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752986 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8399afb1-fbd2-4ce0-b980-46b317d6cfee" containerName="rabbitmq" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.752996 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753003 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753012 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed4379e-5b70-40bf-bc3b-7fc8d557e0d1" containerName="ovn-northd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753020 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad676ea-95d4-483f-a7d7-574744376b19" containerName="kube-state-metrics" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753028 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753034 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a15e7b9-a2f9-41bb-bdf0-0d474eabb2ab" 
containerName="nova-metadata-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753043 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf69949e-5b98-4cb2-9ce6-f979da44c58e" containerName="mariadb-account-delete" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753050 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c6472a-6c69-45ff-b0c2-3bf66e0523f2" containerName="barbican-worker-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753056 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3fd963-7f73-4069-a993-1dfec6751c57" containerName="placement-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753066 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a92efa4-12b6-431d-8aa7-9baa545f7e07" containerName="nova-api-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753073 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee3f8a7-3e0a-4008-a7d7-2aeefc65c71c" containerName="barbican-api" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753082 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="534edd12-5e24-4d56-9f1a-944e1ed4a65b" containerName="barbican-keystone-listener-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753090 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60c122c-3364-4717-bc34-5a610c1a1ac8" containerName="nova-scheduler-scheduler" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753097 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cd2be3-d4e5-4ed7-80f9-54bc15ee3c50" containerName="memcached" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753106 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf78ba6-9116-42cf-8be5-809dd912646c" containerName="glance-httpd" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.753112 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d35d26e-c3f7-4597-80c6-60358f2d2c21" containerName="glance-log" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.754152 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.758464 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvmvb"] Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.883891 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-utilities\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.884245 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft84g\" (UniqueName: \"kubernetes.io/projected/808135b1-c46d-4ec9-a2b6-3e2abeab4655-kube-api-access-ft84g\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.884306 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-catalog-content\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.985440 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-utilities\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.985529 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft84g\" (UniqueName: \"kubernetes.io/projected/808135b1-c46d-4ec9-a2b6-3e2abeab4655-kube-api-access-ft84g\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.985593 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-catalog-content\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.985977 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-utilities\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:55 crc kubenswrapper[5028]: I1123 07:13:55.986093 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-catalog-content\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:56 crc kubenswrapper[5028]: I1123 07:13:56.005833 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ft84g\" (UniqueName: \"kubernetes.io/projected/808135b1-c46d-4ec9-a2b6-3e2abeab4655-kube-api-access-ft84g\") pod \"redhat-operators-hvmvb\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:56 crc kubenswrapper[5028]: I1123 07:13:56.073938 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:13:56 crc kubenswrapper[5028]: I1123 07:13:56.526573 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hvmvb"] Nov 23 07:13:56 crc kubenswrapper[5028]: I1123 07:13:56.754704 5028 generic.go:334] "Generic (PLEG): container finished" podID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerID="9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562" exitCode=0 Nov 23 07:13:56 crc kubenswrapper[5028]: I1123 07:13:56.754747 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvmvb" event={"ID":"808135b1-c46d-4ec9-a2b6-3e2abeab4655","Type":"ContainerDied","Data":"9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562"} Nov 23 07:13:56 crc kubenswrapper[5028]: I1123 07:13:56.754788 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvmvb" event={"ID":"808135b1-c46d-4ec9-a2b6-3e2abeab4655","Type":"ContainerStarted","Data":"b7a56ebad19a3ae3efd70a9c42337fe45043730da6feea96f95b47dc9808537d"} Nov 23 07:13:56 crc kubenswrapper[5028]: I1123 07:13:56.757865 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:13:57 crc kubenswrapper[5028]: E1123 07:13:57.916529 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:57 crc kubenswrapper[5028]: E1123 07:13:57.916867 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts podName:4c18bacc-44ba-49cc-9c2b-9d06834a8cdd nodeName:}" failed. No retries permitted until 2025-11-23 07:14:13.916843531 +0000 UTC m=+1437.614248320 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts") pod "cinder0e8f-account-delete-xlplw" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd") : configmap "openstack-scripts" not found Nov 23 07:13:58 crc kubenswrapper[5028]: I1123 07:13:58.773451 5028 generic.go:334] "Generic (PLEG): container finished" podID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerID="4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2" exitCode=0 Nov 23 07:13:58 crc kubenswrapper[5028]: I1123 07:13:58.773505 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvmvb" event={"ID":"808135b1-c46d-4ec9-a2b6-3e2abeab4655","Type":"ContainerDied","Data":"4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2"} Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.150143 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.150294 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts podName:a00e6664-ab67-4532-9c12-89c6fa223993 nodeName:}" failed. 
No retries permitted until 2025-11-23 07:14:15.150264522 +0000 UTC m=+1438.847669381 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts") pod "barbican5527-account-delete-7gzrg" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993") : configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.251447 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.251553 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.251554 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts podName:75ea620d-8ec5-47db-a758-11a1e3c9d605 nodeName:}" failed. No retries permitted until 2025-11-23 07:14:15.251527688 +0000 UTC m=+1438.948932507 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts") pod "novacell08b1e-account-delete-qctpn" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605") : configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.251649 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts podName:261d60cc-ee7d-463e-add8-4a4e8af392cd nodeName:}" failed. No retries permitted until 2025-11-23 07:14:15.2516359 +0000 UTC m=+1438.949040699 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts") pod "glance8cc3-account-delete-rmsjp" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd") : configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.251819 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: E1123 07:13:59.251980 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts podName:c83094db-b0cb-4be4-a13b-de12d76e1fb0 nodeName:}" failed. No retries permitted until 2025-11-23 07:14:15.251936908 +0000 UTC m=+1438.949341777 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts") pod "novaapi35e8-account-delete-8zqfb" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0") : configmap "openstack-scripts" not found Nov 23 07:13:59 crc kubenswrapper[5028]: I1123 07:13:59.784705 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvmvb" event={"ID":"808135b1-c46d-4ec9-a2b6-3e2abeab4655","Type":"ContainerStarted","Data":"01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96"} Nov 23 07:13:59 crc kubenswrapper[5028]: I1123 07:13:59.806354 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hvmvb" podStartSLOduration=2.372553176 podStartE2EDuration="4.80633929s" podCreationTimestamp="2025-11-23 07:13:55 +0000 UTC" firstStartedPulling="2025-11-23 07:13:56.757405139 +0000 UTC m=+1420.454809918" lastFinishedPulling="2025-11-23 07:13:59.191191213 +0000 UTC m=+1422.888596032" observedRunningTime="2025-11-23 07:13:59.80267752 +0000 UTC m=+1423.500082299" watchObservedRunningTime="2025-11-23 07:13:59.80633929 +0000 UTC m=+1423.503744069" Nov 23 07:14:00 crc kubenswrapper[5028]: E1123 07:14:00.356282 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:14:00 crc kubenswrapper[5028]: E1123 07:14:00.356974 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:14:00 crc kubenswrapper[5028]: E1123 07:14:00.357346 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:14:00 crc kubenswrapper[5028]: E1123 07:14:00.357387 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:14:00 crc kubenswrapper[5028]: E1123 07:14:00.358620 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:14:00 crc kubenswrapper[5028]: 
E1123 07:14:00.360636 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:14:00 crc kubenswrapper[5028]: E1123 07:14:00.362757 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:14:00 crc kubenswrapper[5028]: E1123 07:14:00.362795 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:14:00 crc kubenswrapper[5028]: I1123 07:14:00.948133 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:14:00 crc kubenswrapper[5028]: I1123 07:14:00.948425 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.355665 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.356853 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.357360 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.357536 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container 
process not found" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.357627 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.359345 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.361179 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 23 07:14:05 crc kubenswrapper[5028]: E1123 07:14:05.362215 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-5cm8v" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:14:06 crc kubenswrapper[5028]: I1123 07:14:06.075042 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:14:06 crc kubenswrapper[5028]: I1123 07:14:06.075092 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:14:06 crc kubenswrapper[5028]: I1123 07:14:06.163276 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:14:06 crc kubenswrapper[5028]: I1123 07:14:06.898670 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:14:06 crc kubenswrapper[5028]: I1123 07:14:06.955202 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hvmvb"] Nov 23 07:14:07 crc kubenswrapper[5028]: I1123 07:14:07.873714 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-56d56d656c-8p7fn" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.752798 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cm8v_a59bba70-8fe2-4c63-9537-16dcf5ee2e0f/ovs-vswitchd/0.log" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.753994 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.881259 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-56d56d656c-8p7fn_f7d02425-cddb-44e9-983a-456b2dc4d6fe/neutron-api/0.log" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.881344 5028 generic.go:334] "Generic (PLEG): container finished" podID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerID="940f74017dba594cbc47228face62b55ed6f8064b06190a9c015b8bd33b0e3f6" exitCode=137 Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.881460 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d56d656c-8p7fn" event={"ID":"f7d02425-cddb-44e9-983a-456b2dc4d6fe","Type":"ContainerDied","Data":"940f74017dba594cbc47228face62b55ed6f8064b06190a9c015b8bd33b0e3f6"} Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.885272 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-5cm8v_a59bba70-8fe2-4c63-9537-16dcf5ee2e0f/ovs-vswitchd/0.log" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.887305 5028 generic.go:334] "Generic (PLEG): container finished" podID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" exitCode=137 Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.887435 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-5cm8v" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.887416 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cm8v" event={"ID":"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f","Type":"ContainerDied","Data":"acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6"} Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.887596 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-5cm8v" event={"ID":"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f","Type":"ContainerDied","Data":"8db741f4e8db3546b964be3fed0b6b97acca91e5e24e2c7ecfbe837030e8e22f"} Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.887622 5028 scope.go:117] "RemoveContainer" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.887876 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hvmvb" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="registry-server" containerID="cri-o://01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96" gracePeriod=2 Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907328 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-log\") pod \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907406 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-lib\") pod \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907444 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-etc-ovs\") pod \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907471 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-scripts\") pod \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907487 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-run\") pod \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907490 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-log" (OuterVolumeSpecName: "var-log") pod "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" (UID: "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907555 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" (UID: "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907577 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnt45\" (UniqueName: \"kubernetes.io/projected/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-kube-api-access-rnt45\") pod \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\" (UID: \"a59bba70-8fe2-4c63-9537-16dcf5ee2e0f\") " Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907924 5028 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-log\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907934 5028 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.907584 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-lib" (OuterVolumeSpecName: "var-lib") pod "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" (UID: "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.908008 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-run" (OuterVolumeSpecName: "var-run") pod "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" (UID: "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.909935 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-scripts" (OuterVolumeSpecName: "scripts") pod "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" (UID: "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.917391 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-kube-api-access-rnt45" (OuterVolumeSpecName: "kube-api-access-rnt45") pod "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" (UID: "a59bba70-8fe2-4c63-9537-16dcf5ee2e0f"). InnerVolumeSpecName "kube-api-access-rnt45". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:08 crc kubenswrapper[5028]: I1123 07:14:08.930834 5028 scope.go:117] "RemoveContainer" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.009959 5028 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-lib\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.010040 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.010051 5028 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-var-run\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.010064 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnt45\" (UniqueName: \"kubernetes.io/projected/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f-kube-api-access-rnt45\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.011339 5028 scope.go:117] "RemoveContainer" containerID="3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.070090 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-56d56d656c-8p7fn_f7d02425-cddb-44e9-983a-456b2dc4d6fe/neutron-api/0.log" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.070181 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.120149 5028 scope.go:117] "RemoveContainer" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.124122 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6\": container with ID starting with acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6 not found: ID does not exist" containerID="acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.124174 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6"} err="failed to get container status \"acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6\": rpc error: code = NotFound desc = could not find container \"acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6\": container with ID starting with acbaab1cc92742de73e3fdc0b11acc19393981faae55256cedfcf1d9f0c05ce6 not found: ID does not exist" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.124205 5028 scope.go:117] "RemoveContainer" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.130294 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa\": container with ID starting with 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa not found: ID does not exist" containerID="87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.130340 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa"} err="failed to get container status \"87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa\": rpc error: code = NotFound desc = could not find container \"87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa\": container with ID starting with 87dfc4487c76da49d0d262cd3aeea423cb9496ef5e8bc036461f946c7d89f9aa not found: ID does not exist" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.130369 5028 scope.go:117] "RemoveContainer" containerID="3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19" Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.134227 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19\": container with ID starting with 3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19 not found: ID does not exist" containerID="3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.134274 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19"} err="failed to get container status \"3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19\": rpc error: code = NotFound desc = 
could not find container \"3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19\": container with ID starting with 3579f0d31365941b9059785319267274dcd5fe4935b9ff05d7754b8d11043d19 not found: ID does not exist" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.215174 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-internal-tls-certs\") pod \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.215213 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-combined-ca-bundle\") pod \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.215290 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-public-tls-certs\") pod \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.215326 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-config\") pod \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.215359 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mgzn\" (UniqueName: \"kubernetes.io/projected/f7d02425-cddb-44e9-983a-456b2dc4d6fe-kube-api-access-7mgzn\") pod \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.215410 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-ovndb-tls-certs\") pod \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.215448 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-httpd-config\") pod \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\" (UID: \"f7d02425-cddb-44e9-983a-456b2dc4d6fe\") " Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.240163 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f7d02425-cddb-44e9-983a-456b2dc4d6fe" (UID: "f7d02425-cddb-44e9-983a-456b2dc4d6fe"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.240562 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-5cm8v"] Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.246169 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d02425-cddb-44e9-983a-456b2dc4d6fe-kube-api-access-7mgzn" (OuterVolumeSpecName: "kube-api-access-7mgzn") pod "f7d02425-cddb-44e9-983a-456b2dc4d6fe" (UID: "f7d02425-cddb-44e9-983a-456b2dc4d6fe"). InnerVolumeSpecName "kube-api-access-7mgzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.251874 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-5cm8v"] Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.276938 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7d02425-cddb-44e9-983a-456b2dc4d6fe" (UID: "f7d02425-cddb-44e9-983a-456b2dc4d6fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.278783 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f7d02425-cddb-44e9-983a-456b2dc4d6fe" (UID: "f7d02425-cddb-44e9-983a-456b2dc4d6fe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.281675 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7d02425-cddb-44e9-983a-456b2dc4d6fe" (UID: "f7d02425-cddb-44e9-983a-456b2dc4d6fe"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.291797 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-config" (OuterVolumeSpecName: "config") pod "f7d02425-cddb-44e9-983a-456b2dc4d6fe" (UID: "f7d02425-cddb-44e9-983a-456b2dc4d6fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.307732 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f7d02425-cddb-44e9-983a-456b2dc4d6fe" (UID: "f7d02425-cddb-44e9-983a-456b2dc4d6fe"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.317667 5028 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.317690 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.317700 5028 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.317708 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.317719 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mgzn\" (UniqueName: \"kubernetes.io/projected/f7d02425-cddb-44e9-983a-456b2dc4d6fe-kube-api-access-7mgzn\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.317729 5028 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.317737 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f7d02425-cddb-44e9-983a-456b2dc4d6fe-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.907411 5028 generic.go:334] "Generic (PLEG): container finished" podID="00067261-cd23-4c2f-8be4-24b01eaac580" containerID="1aafd0f3d20a9763f9c844dde7d914b68a8c9b6c1813f1ef8f09835c63225eb8" exitCode=137 Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.907527 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"1aafd0f3d20a9763f9c844dde7d914b68a8c9b6c1813f1ef8f09835c63225eb8"} Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.910449 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-56d56d656c-8p7fn_f7d02425-cddb-44e9-983a-456b2dc4d6fe/neutron-api/0.log" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.910583 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56d56d656c-8p7fn" event={"ID":"f7d02425-cddb-44e9-983a-456b2dc4d6fe","Type":"ContainerDied","Data":"df6bb19c129f226f28d28f8855ea9d5e76923a79a6784d7ce14fe385d5b16e81"} Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.910623 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56d56d656c-8p7fn" Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.910667 5028 scope.go:117] "RemoveContainer" containerID="5634973a7ea3c3061dced8c30254a7c5f72ea712c7865557fc5eee14be148b26" Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.927063 5028 projected.go:288] Couldn't get configMap openstack/swift-storage-config-data: configmap "swift-storage-config-data" not found Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.927097 5028 projected.go:263] Couldn't get secret openstack/swift-conf: secret "swift-conf" not found Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.927108 5028 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.927120 5028 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:14:09 crc kubenswrapper[5028]: E1123 07:14:09.927168 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift podName:00067261-cd23-4c2f-8be4-24b01eaac580 nodeName:}" failed. No retries permitted until 2025-11-23 07:14:41.927152102 +0000 UTC m=+1465.624556881 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift") pod "swift-storage-0" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580") : [configmap "swift-storage-config-data" not found, secret "swift-conf" not found, configmap "swift-ring-files" not found] Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.943180 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56d56d656c-8p7fn"] Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.953981 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56d56d656c-8p7fn"] Nov 23 07:14:09 crc kubenswrapper[5028]: I1123 07:14:09.961784 5028 scope.go:117] "RemoveContainer" containerID="940f74017dba594cbc47228face62b55ed6f8064b06190a9c015b8bd33b0e3f6" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.769183 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.912660 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.922411 5028 generic.go:334] "Generic (PLEG): container finished" podID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerID="01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96" exitCode=0 Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.922467 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hvmvb" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.922501 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvmvb" event={"ID":"808135b1-c46d-4ec9-a2b6-3e2abeab4655","Type":"ContainerDied","Data":"01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96"} Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.922542 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hvmvb" event={"ID":"808135b1-c46d-4ec9-a2b6-3e2abeab4655","Type":"ContainerDied","Data":"b7a56ebad19a3ae3efd70a9c42337fe45043730da6feea96f95b47dc9808537d"} Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.922562 5028 scope.go:117] "RemoveContainer" containerID="01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.933307 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00067261-cd23-4c2f-8be4-24b01eaac580","Type":"ContainerDied","Data":"1b57be1d0736719a2582d7aa1ff102592a68ff554c23c6adad468c94b01affc7"} Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.933666 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.946609 5028 scope.go:117] "RemoveContainer" containerID="4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.947554 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-catalog-content\") pod \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.947699 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-utilities\") pod \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.947897 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft84g\" (UniqueName: \"kubernetes.io/projected/808135b1-c46d-4ec9-a2b6-3e2abeab4655-kube-api-access-ft84g\") pod \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\" (UID: \"808135b1-c46d-4ec9-a2b6-3e2abeab4655\") " Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.950687 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-utilities" (OuterVolumeSpecName: "utilities") pod "808135b1-c46d-4ec9-a2b6-3e2abeab4655" (UID: "808135b1-c46d-4ec9-a2b6-3e2abeab4655"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.954249 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808135b1-c46d-4ec9-a2b6-3e2abeab4655-kube-api-access-ft84g" (OuterVolumeSpecName: "kube-api-access-ft84g") pod "808135b1-c46d-4ec9-a2b6-3e2abeab4655" (UID: "808135b1-c46d-4ec9-a2b6-3e2abeab4655"). InnerVolumeSpecName "kube-api-access-ft84g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:10 crc kubenswrapper[5028]: I1123 07:14:10.990440 5028 scope.go:117] "RemoveContainer" containerID="9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.006551 5028 scope.go:117] "RemoveContainer" containerID="01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96" Nov 23 07:14:11 crc kubenswrapper[5028]: E1123 07:14:11.006979 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96\": container with ID starting with 01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96 not found: ID does not exist" containerID="01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.007024 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96"} err="failed to get container status \"01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96\": rpc error: code = NotFound desc = could not find container \"01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96\": container with ID starting with 01feb85d82210387360105b275b41d9409431d716b003b46d89dd77bd5935f96 not found: ID does not exist" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.007054 5028 scope.go:117] "RemoveContainer" containerID="4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2" Nov 23 07:14:11 crc kubenswrapper[5028]: E1123 07:14:11.007537 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2\": container with ID starting with 4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2 not found: ID does not exist" containerID="4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.007573 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2"} err="failed to get container status \"4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2\": rpc error: code = NotFound desc = could not find container \"4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2\": container with ID starting with 4eee9de65aa9b90d526c7d029689623d79bb3d88e93ac7cda1ee750cc5bdcfa2 not found: ID does not exist" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.007590 5028 scope.go:117] "RemoveContainer" containerID="9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562" Nov 23 07:14:11 crc kubenswrapper[5028]: E1123 07:14:11.007919 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562\": container with ID starting with 9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562 not found: ID does not exist" containerID="9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.008043 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562"} err="failed to get container status \"9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562\": rpc error: code = NotFound desc = could not find container \"9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562\": container with ID starting with 9c627c38b6e6679a64e0d801c6ae989815a6d4d3ae513d003d86223a8af17562 not found: ID does not exist" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.008139 5028 scope.go:117] "RemoveContainer" containerID="1aafd0f3d20a9763f9c844dde7d914b68a8c9b6c1813f1ef8f09835c63225eb8" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.025168 5028 scope.go:117] "RemoveContainer" containerID="40974909c6c254ecbda3968c5a7f724e9828b2c9c3d631d0c9e90be6edb3e66d" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.041193 5028 scope.go:117] "RemoveContainer" containerID="2df62fa2c1e66a404dc3b2733961ec36cec00bf7f77ead77869b439ade6e92b1" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.049212 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"00067261-cd23-4c2f-8be4-24b01eaac580\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.049385 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-cache\") pod \"00067261-cd23-4c2f-8be4-24b01eaac580\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.049515 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-lock\") pod \"00067261-cd23-4c2f-8be4-24b01eaac580\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.049664 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvglf\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-kube-api-access-cvglf\") pod \"00067261-cd23-4c2f-8be4-24b01eaac580\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.049772 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") pod \"00067261-cd23-4c2f-8be4-24b01eaac580\" (UID: \"00067261-cd23-4c2f-8be4-24b01eaac580\") " Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.049971 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-lock" (OuterVolumeSpecName: "lock") pod "00067261-cd23-4c2f-8be4-24b01eaac580" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.050281 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-cache" (OuterVolumeSpecName: "cache") pod "00067261-cd23-4c2f-8be4-24b01eaac580" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.050380 5028 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-lock\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.050466 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft84g\" (UniqueName: \"kubernetes.io/projected/808135b1-c46d-4ec9-a2b6-3e2abeab4655-kube-api-access-ft84g\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.050546 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.052422 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "00067261-cd23-4c2f-8be4-24b01eaac580" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.053496 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-kube-api-access-cvglf" (OuterVolumeSpecName: "kube-api-access-cvglf") pod "00067261-cd23-4c2f-8be4-24b01eaac580" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580"). InnerVolumeSpecName "kube-api-access-cvglf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.055094 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "00067261-cd23-4c2f-8be4-24b01eaac580" (UID: "00067261-cd23-4c2f-8be4-24b01eaac580"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.058338 5028 scope.go:117] "RemoveContainer" containerID="81a687eb71e007a1ac13feb65bbc68b1c3f2bf021519d85e91cd16f4d603b2f9" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.062319 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" path="/var/lib/kubelet/pods/a59bba70-8fe2-4c63-9537-16dcf5ee2e0f/volumes" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.063118 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" path="/var/lib/kubelet/pods/f7d02425-cddb-44e9-983a-456b2dc4d6fe/volumes" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.064667 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "808135b1-c46d-4ec9-a2b6-3e2abeab4655" (UID: "808135b1-c46d-4ec9-a2b6-3e2abeab4655"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.076529 5028 scope.go:117] "RemoveContainer" containerID="3457d5419e6677ab415baff0a0c4f5bcfce5e9c08759e54ca60a97c9fe0f0b09" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.092772 5028 scope.go:117] "RemoveContainer" containerID="3758a8edb86f9cc2311dfbbc6420a20b7bb4456290271a1c277bd6e7daaf2d0b" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.109489 5028 scope.go:117] "RemoveContainer" containerID="9b2449d180857fa287537e9f2caa3d5b2c6ef6945336b39d4b2e2f1bba6f48e5" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.125043 5028 scope.go:117] "RemoveContainer" containerID="f7f3db02234e290a3cc4660fddd3b6ddc3c347130630535c71abc6cc72896ac8" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.147684 5028 scope.go:117] "RemoveContainer" containerID="34d6b82184b9d4d53e4cb202e9b47148b5aa74237f7bd04d23d2f1b5f8f45fee" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.155000 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808135b1-c46d-4ec9-a2b6-3e2abeab4655-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.155038 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.155049 5028 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00067261-cd23-4c2f-8be4-24b01eaac580-cache\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.155281 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvglf\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-kube-api-access-cvglf\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.155627 5028 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00067261-cd23-4c2f-8be4-24b01eaac580-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.169395 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.170905 5028 scope.go:117] "RemoveContainer" containerID="93de291167c5b5543ba1d794eabbe63b56604ecdcef6943568578c5bb4a29229" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.188963 5028 scope.go:117] "RemoveContainer" containerID="f33d8257c3344d1c41e045d276b50855aed400ece4dc05b5dbad0b7e7e645ec1" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.205192 5028 scope.go:117] "RemoveContainer" containerID="51780654d47b4748040c6f8ea75ab63207224c0aa1e348e73e20d5e202474d89" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.224157 5028 scope.go:117] "RemoveContainer" containerID="75bf1a6baebd3a81179a9db585726101ffd93df8b27cb4e5da1fb372c8b6ce89" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.241052 5028 scope.go:117] "RemoveContainer" containerID="2cff70916c4b86894d8212d9f22c1034cf16138344b62b7e140d3c041d91cee7" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.254394 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-hvmvb"] Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.256580 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.260069 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hvmvb"] Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.270156 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.273796 5028 scope.go:117] "RemoveContainer" containerID="284fcc70f4d39f784940ff357d718a373de8c2e8881a64f54fa7a0acceaadf32" Nov 23 07:14:11 crc kubenswrapper[5028]: I1123 07:14:11.275338 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 23 07:14:13 crc kubenswrapper[5028]: I1123 07:14:13.065730 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" path="/var/lib/kubelet/pods/00067261-cd23-4c2f-8be4-24b01eaac580/volumes" Nov 23 07:14:13 crc kubenswrapper[5028]: I1123 07:14:13.071406 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" path="/var/lib/kubelet/pods/808135b1-c46d-4ec9-a2b6-3e2abeab4655/volumes" Nov 23 07:14:13 crc kubenswrapper[5028]: I1123 07:14:13.844520 5028 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod230d8024-5d83-4742-9bf9-77bc956dd4a9"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod230d8024-5d83-4742-9bf9-77bc956dd4a9] : Timed out while waiting for systemd to remove kubepods-besteffort-pod230d8024_5d83_4742_9bf9_77bc956dd4a9.slice" Nov 23 07:14:14 crc kubenswrapper[5028]: E1123 07:14:14.001682 5028 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 23 07:14:14 crc kubenswrapper[5028]: E1123 07:14:14.001750 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts podName:4c18bacc-44ba-49cc-9c2b-9d06834a8cdd nodeName:}" failed. No retries permitted until 2025-11-23 07:14:46.001735655 +0000 UTC m=+1469.699140434 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts") pod "cinder0e8f-account-delete-xlplw" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd") : configmap "openstack-scripts" not found Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.963619 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.971250 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.963619 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.971250 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.979511 5028 generic.go:334] "Generic (PLEG): container finished" podID="75ea620d-8ec5-47db-a758-11a1e3c9d605" containerID="c0e438b5feadcc60de0125b6ead3243fc9fdba982d8c527a3af16187c90ff94e" exitCode=137 Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.979600 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell08b1e-account-delete-qctpn" event={"ID":"75ea620d-8ec5-47db-a758-11a1e3c9d605","Type":"ContainerDied","Data":"c0e438b5feadcc60de0125b6ead3243fc9fdba982d8c527a3af16187c90ff94e"} Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.979650 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell08b1e-account-delete-qctpn" event={"ID":"75ea620d-8ec5-47db-a758-11a1e3c9d605","Type":"ContainerDied","Data":"ab960e602ff9a88d7cef581f8763cc60e1d2d310c671fab15a3bf9e6b1058457"} Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.979665 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab960e602ff9a88d7cef581f8763cc60e1d2d310c671fab15a3bf9e6b1058457" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.980172 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.981300 5028 generic.go:334] "Generic (PLEG): container finished" podID="c83094db-b0cb-4be4-a13b-de12d76e1fb0" containerID="72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3" exitCode=137 Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.981366 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi35e8-account-delete-8zqfb" event={"ID":"c83094db-b0cb-4be4-a13b-de12d76e1fb0","Type":"ContainerDied","Data":"72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3"} Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.981430 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi35e8-account-delete-8zqfb" event={"ID":"c83094db-b0cb-4be4-a13b-de12d76e1fb0","Type":"ContainerDied","Data":"059b3d43d4834c327f5fc100298a2a88f412d2995e26f61f3c2481531d97d98a"} Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.981463 5028 scope.go:117] "RemoveContainer" containerID="72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.991146 5028 generic.go:334] "Generic (PLEG): container finished" podID="261d60cc-ee7d-463e-add8-4a4e8af392cd" containerID="cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5" exitCode=137
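
Every one of these account-delete containers finishes with exitCode=137. Exit codes above 128 encode death by signal (code - 128), so 137 is SIGKILL (9): the processes did not exit on their own but were force-killed, as happens when a deletion grace period expires before a container stops. A tiny decoder:

package main

import (
	"fmt"
	"syscall"
)

// decodeExitCode maps a container exit code to its likely cause:
// codes > 128 mean the process died from signal (code - 128).
func decodeExitCode(code int) string {
	if code > 128 {
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
	}
	return fmt.Sprintf("exited with status %d", code)
}

func main() {
	fmt.Println(decodeExitCode(137)) // killed by signal 9 (killed)
	fmt.Println(decodeExitCode(143)) // killed by signal 15 (terminated)
	fmt.Println(decodeExitCode(0))   // exited with status 0
}
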
Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.991189 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance8cc3-account-delete-rmsjp" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.991249 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8cc3-account-delete-rmsjp" event={"ID":"261d60cc-ee7d-463e-add8-4a4e8af392cd","Type":"ContainerDied","Data":"cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5"} Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.991283 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance8cc3-account-delete-rmsjp" event={"ID":"261d60cc-ee7d-463e-add8-4a4e8af392cd","Type":"ContainerDied","Data":"829814c5bd791c3940f9832e644ed882bf2d481d27976ce5d5c9a7099667130b"} Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.992079 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.994333 5028 generic.go:334] "Generic (PLEG): container finished" podID="a00e6664-ab67-4532-9c12-89c6fa223993" containerID="9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537" exitCode=137 Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.994388 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican5527-account-delete-7gzrg" Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.994419 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5527-account-delete-7gzrg" event={"ID":"a00e6664-ab67-4532-9c12-89c6fa223993","Type":"ContainerDied","Data":"9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537"} Nov 23 07:14:14 crc kubenswrapper[5028]: I1123 07:14:14.994453 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican5527-account-delete-7gzrg" event={"ID":"a00e6664-ab67-4532-9c12-89c6fa223993","Type":"ContainerDied","Data":"44abe3e25366bc6d2873e8645c61f50dc35f50c5053651c718e28b2667fbf09d"} Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.002140 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.005186 5028 generic.go:334] "Generic (PLEG): container finished" podID="4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" containerID="e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4" exitCode=137 Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.005240 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder0e8f-account-delete-xlplw" event={"ID":"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd","Type":"ContainerDied","Data":"e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4"} Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.005270 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder0e8f-account-delete-xlplw" event={"ID":"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd","Type":"ContainerDied","Data":"dfe4eb387e3cd9102db4e0f498f40426d3e2ce26235f8e1766f9a8a01510eb54"} Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.016482 5028 scope.go:117] "RemoveContainer" containerID="72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3" Nov 23 07:14:15 crc kubenswrapper[5028]: E1123 07:14:15.018508 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3\": container with ID starting with 72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3 not found: ID does not exist" containerID="72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.018545 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3"} err="failed to get container status \"72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3\": rpc error: code = NotFound desc = could not find container \"72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3\": container with ID starting with 72423b1f263b857f85339c0c829f30fec658c5d6bfb709c955c69fea278736d3 not found: ID does not exist" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.018569 5028 scope.go:117] "RemoveContainer" containerID="cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.041845 5028 scope.go:117] "RemoveContainer" containerID="cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5" Nov 23 07:14:15 crc kubenswrapper[5028]: E1123 07:14:15.043252 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5\": container with ID starting with cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5 not found: ID does not exist" containerID="cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.043293 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5"} err="failed to get container status \"cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5\": rpc error: code = NotFound desc = could not find container \"cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5\": container with ID starting with 
cba3c843d02ba642a236b8070d0f091e1a1d2b8e7686f0b7b0349806af94f1d5 not found: ID does not exist" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.043317 5028 scope.go:117] "RemoveContainer" containerID="9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.065574 5028 scope.go:117] "RemoveContainer" containerID="9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537" Nov 23 07:14:15 crc kubenswrapper[5028]: E1123 07:14:15.065917 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537\": container with ID starting with 9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537 not found: ID does not exist" containerID="9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.065967 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537"} err="failed to get container status \"9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537\": rpc error: code = NotFound desc = could not find container \"9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537\": container with ID starting with 9ef802212f12442c955b393fa71b7532d3901d718e706e15c24604043975e537 not found: ID does not exist" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.065997 5028 scope.go:117] "RemoveContainer" containerID="e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.086264 5028 scope.go:117] "RemoveContainer" containerID="e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4" Nov 23 07:14:15 crc kubenswrapper[5028]: E1123 07:14:15.086609 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4\": container with ID starting with e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4 not found: ID does not exist" containerID="e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.086658 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4"} err="failed to get container status \"e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4\": rpc error: code = NotFound desc = could not find container \"e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4\": container with ID starting with e3c52e45c3f111fbcadb4fa758318c4020c6a938708f9409bc065c66714fecd4 not found: ID does not exist" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115222 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts\") pod \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115290 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts\") pod 
\"c83094db-b0cb-4be4-a13b-de12d76e1fb0\" (UID: \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115347 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hz46\" (UniqueName: \"kubernetes.io/projected/c83094db-b0cb-4be4-a13b-de12d76e1fb0-kube-api-access-7hz46\") pod \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\" (UID: \"c83094db-b0cb-4be4-a13b-de12d76e1fb0\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115787 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfqnn\" (UniqueName: \"kubernetes.io/projected/75ea620d-8ec5-47db-a758-11a1e3c9d605-kube-api-access-pfqnn\") pod \"75ea620d-8ec5-47db-a758-11a1e3c9d605\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115829 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr46m\" (UniqueName: \"kubernetes.io/projected/261d60cc-ee7d-463e-add8-4a4e8af392cd-kube-api-access-pr46m\") pod \"261d60cc-ee7d-463e-add8-4a4e8af392cd\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115872 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts\") pod \"a00e6664-ab67-4532-9c12-89c6fa223993\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115898 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts\") pod \"75ea620d-8ec5-47db-a758-11a1e3c9d605\" (UID: \"75ea620d-8ec5-47db-a758-11a1e3c9d605\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115936 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrtd\" (UniqueName: \"kubernetes.io/projected/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-kube-api-access-rsrtd\") pod \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\" (UID: \"4c18bacc-44ba-49cc-9c2b-9d06834a8cdd\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.115983 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts\") pod \"261d60cc-ee7d-463e-add8-4a4e8af392cd\" (UID: \"261d60cc-ee7d-463e-add8-4a4e8af392cd\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116005 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnpr7\" (UniqueName: \"kubernetes.io/projected/a00e6664-ab67-4532-9c12-89c6fa223993-kube-api-access-tnpr7\") pod \"a00e6664-ab67-4532-9c12-89c6fa223993\" (UID: \"a00e6664-ab67-4532-9c12-89c6fa223993\") " Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116096 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116143 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c83094db-b0cb-4be4-a13b-de12d76e1fb0" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116387 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75ea620d-8ec5-47db-a758-11a1e3c9d605" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116726 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "261d60cc-ee7d-463e-add8-4a4e8af392cd" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116735 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116782 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c83094db-b0cb-4be4-a13b-de12d76e1fb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.116798 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ea620d-8ec5-47db-a758-11a1e3c9d605-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.117367 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a00e6664-ab67-4532-9c12-89c6fa223993" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.120531 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ea620d-8ec5-47db-a758-11a1e3c9d605-kube-api-access-pfqnn" (OuterVolumeSpecName: "kube-api-access-pfqnn") pod "75ea620d-8ec5-47db-a758-11a1e3c9d605" (UID: "75ea620d-8ec5-47db-a758-11a1e3c9d605"). InnerVolumeSpecName "kube-api-access-pfqnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.120634 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83094db-b0cb-4be4-a13b-de12d76e1fb0-kube-api-access-7hz46" (OuterVolumeSpecName: "kube-api-access-7hz46") pod "c83094db-b0cb-4be4-a13b-de12d76e1fb0" (UID: "c83094db-b0cb-4be4-a13b-de12d76e1fb0"). InnerVolumeSpecName "kube-api-access-7hz46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.120678 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00e6664-ab67-4532-9c12-89c6fa223993-kube-api-access-tnpr7" (OuterVolumeSpecName: "kube-api-access-tnpr7") pod "a00e6664-ab67-4532-9c12-89c6fa223993" (UID: "a00e6664-ab67-4532-9c12-89c6fa223993"). InnerVolumeSpecName "kube-api-access-tnpr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.120750 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261d60cc-ee7d-463e-add8-4a4e8af392cd-kube-api-access-pr46m" (OuterVolumeSpecName: "kube-api-access-pr46m") pod "261d60cc-ee7d-463e-add8-4a4e8af392cd" (UID: "261d60cc-ee7d-463e-add8-4a4e8af392cd"). InnerVolumeSpecName "kube-api-access-pr46m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.120968 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-kube-api-access-rsrtd" (OuterVolumeSpecName: "kube-api-access-rsrtd") pod "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" (UID: "4c18bacc-44ba-49cc-9c2b-9d06834a8cdd"). InnerVolumeSpecName "kube-api-access-rsrtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.217514 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261d60cc-ee7d-463e-add8-4a4e8af392cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.217552 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnpr7\" (UniqueName: \"kubernetes.io/projected/a00e6664-ab67-4532-9c12-89c6fa223993-kube-api-access-tnpr7\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.217568 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hz46\" (UniqueName: \"kubernetes.io/projected/c83094db-b0cb-4be4-a13b-de12d76e1fb0-kube-api-access-7hz46\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.217579 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfqnn\" (UniqueName: \"kubernetes.io/projected/75ea620d-8ec5-47db-a758-11a1e3c9d605-kube-api-access-pfqnn\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.217591 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr46m\" (UniqueName: \"kubernetes.io/projected/261d60cc-ee7d-463e-add8-4a4e8af392cd-kube-api-access-pr46m\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.217603 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00e6664-ab67-4532-9c12-89c6fa223993-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.217614 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrtd\" (UniqueName: \"kubernetes.io/projected/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd-kube-api-access-rsrtd\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.321902 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance8cc3-account-delete-rmsjp"] Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.329782 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance8cc3-account-delete-rmsjp"] Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.334282 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican5527-account-delete-7gzrg"] Nov 23 07:14:15 crc kubenswrapper[5028]: I1123 07:14:15.338109 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican5527-account-delete-7gzrg"] Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.017470 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder0e8f-account-delete-xlplw" Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.023278 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi35e8-account-delete-8zqfb" Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.027889 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell08b1e-account-delete-qctpn" Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.078775 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi35e8-account-delete-8zqfb"] Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.090794 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi35e8-account-delete-8zqfb"] Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.098414 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell08b1e-account-delete-qctpn"] Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.104215 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell08b1e-account-delete-qctpn"] Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.110647 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder0e8f-account-delete-xlplw"] Nov 23 07:14:16 crc kubenswrapper[5028]: I1123 07:14:16.115268 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder0e8f-account-delete-xlplw"] Nov 23 07:14:17 crc kubenswrapper[5028]: I1123 07:14:17.062739 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261d60cc-ee7d-463e-add8-4a4e8af392cd" path="/var/lib/kubelet/pods/261d60cc-ee7d-463e-add8-4a4e8af392cd/volumes" Nov 23 07:14:17 crc kubenswrapper[5028]: I1123 07:14:17.063337 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" path="/var/lib/kubelet/pods/4c18bacc-44ba-49cc-9c2b-9d06834a8cdd/volumes" Nov 23 07:14:17 crc kubenswrapper[5028]: I1123 07:14:17.063969 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ea620d-8ec5-47db-a758-11a1e3c9d605" path="/var/lib/kubelet/pods/75ea620d-8ec5-47db-a758-11a1e3c9d605/volumes" Nov 23 07:14:17 crc kubenswrapper[5028]: I1123 07:14:17.064790 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00e6664-ab67-4532-9c12-89c6fa223993" path="/var/lib/kubelet/pods/a00e6664-ab67-4532-9c12-89c6fa223993/volumes" Nov 23 07:14:17 crc kubenswrapper[5028]: I1123 07:14:17.067663 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83094db-b0cb-4be4-a13b-de12d76e1fb0" path="/var/lib/kubelet/pods/c83094db-b0cb-4be4-a13b-de12d76e1fb0/volumes" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.238247 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-5wnwm"] Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239162 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239177 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239191 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239199 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-server" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239218 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="swift-recon-cron" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239226 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="swift-recon-cron" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239240 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239248 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239261 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00e6664-ab67-4532-9c12-89c6fa223993" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239268 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00e6664-ab67-4532-9c12-89c6fa223993" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239278 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-api" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239286 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-api" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239298 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ea620d-8ec5-47db-a758-11a1e3c9d605" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239305 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ea620d-8ec5-47db-a758-11a1e3c9d605" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239314 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-reaper" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239322 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-reaper" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239335 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239344 5028 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239353 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="extract-utilities" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239360 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="extract-utilities" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239374 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239381 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239393 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239401 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239412 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261d60cc-ee7d-463e-add8-4a4e8af392cd" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239419 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="261d60cc-ee7d-463e-add8-4a4e8af392cd" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239429 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239438 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239453 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239460 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239471 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-expirer" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239478 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-expirer" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239491 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="registry-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239498 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="registry-server" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239513 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-httpd" Nov 23 07:14:28 crc 
kubenswrapper[5028]: I1123 07:14:28.239521 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-httpd" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239529 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239536 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-server" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239549 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239556 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239569 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-updater" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239576 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-updater" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239587 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239595 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239610 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="rsync" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239617 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="rsync" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239630 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server-init" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239637 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server-init" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239650 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-updater" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239658 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-updater" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239669 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83094db-b0cb-4be4-a13b-de12d76e1fb0" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239678 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83094db-b0cb-4be4-a13b-de12d76e1fb0" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239689 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-server" Nov 23 
07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239696 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-server" Nov 23 07:14:28 crc kubenswrapper[5028]: E1123 07:14:28.239707 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="extract-content" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239713 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="extract-content" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239864 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-httpd" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239878 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83094db-b0cb-4be4-a13b-de12d76e1fb0" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239890 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239901 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239913 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="rsync" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239925 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239933 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovsdb-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239962 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239979 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-updater" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.239994 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240004 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240019 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-updater" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240030 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d02425-cddb-44e9-983a-456b2dc4d6fe" containerName="neutron-api" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240044 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="object-expirer" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240053 5028 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-replicator" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240062 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="261d60cc-ee7d-463e-add8-4a4e8af392cd" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240069 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59bba70-8fe2-4c63-9537-16dcf5ee2e0f" containerName="ovs-vswitchd" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240082 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240091 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="swift-recon-cron" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240101 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="account-reaper" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240111 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c18bacc-44ba-49cc-9c2b-9d06834a8cdd" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240125 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00e6664-ab67-4532-9c12-89c6fa223993" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240134 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00067261-cd23-4c2f-8be4-24b01eaac580" containerName="container-auditor" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240146 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="808135b1-c46d-4ec9-a2b6-3e2abeab4655" containerName="registry-server" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.240154 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ea620d-8ec5-47db-a758-11a1e3c9d605" containerName="mariadb-account-delete" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.241391 5028 util.go:30] "No sandbox for pod can be found. 
Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.241391 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.253854 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wnwm"] Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.321717 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-catalog-content\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.321773 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhftg\" (UniqueName: \"kubernetes.io/projected/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-kube-api-access-vhftg\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.321881 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-utilities\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.423654 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhftg\" (UniqueName: \"kubernetes.io/projected/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-kube-api-access-vhftg\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.423730 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-utilities\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.423820 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-catalog-content\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.424197 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-utilities\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.424263 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-catalog-content\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.445122 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vhftg\" (UniqueName: \"kubernetes.io/projected/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-kube-api-access-vhftg\") pod \"certified-operators-5wnwm\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.564206 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:28 crc kubenswrapper[5028]: I1123 07:14:28.873599 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wnwm"] Nov 23 07:14:29 crc kubenswrapper[5028]: I1123 07:14:29.145292 5028 generic.go:334] "Generic (PLEG): container finished" podID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerID="828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e" exitCode=0 Nov 23 07:14:29 crc kubenswrapper[5028]: I1123 07:14:29.145362 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wnwm" event={"ID":"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5","Type":"ContainerDied","Data":"828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e"} Nov 23 07:14:29 crc kubenswrapper[5028]: I1123 07:14:29.145606 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wnwm" event={"ID":"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5","Type":"ContainerStarted","Data":"a4da62615204442e8c606c8d38736f389fcfa37733968b61547b97266631c667"} Nov 23 07:14:30 crc kubenswrapper[5028]: I1123 07:14:30.155866 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wnwm" event={"ID":"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5","Type":"ContainerStarted","Data":"954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73"} Nov 23 07:14:30 crc kubenswrapper[5028]: I1123 07:14:30.946856 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:14:30 crc kubenswrapper[5028]: I1123 07:14:30.947001 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:14:31 crc kubenswrapper[5028]: I1123 07:14:31.167905 5028 generic.go:334] "Generic (PLEG): container finished" podID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerID="954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73" exitCode=0 Nov 23 07:14:31 crc kubenswrapper[5028]: I1123 07:14:31.167971 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wnwm" event={"ID":"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5","Type":"ContainerDied","Data":"954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73"} Nov 23 07:14:32 crc kubenswrapper[5028]: I1123 07:14:32.178313 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wnwm" event={"ID":"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5","Type":"ContainerStarted","Data":"e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3"} Nov 23 
07:14:38 crc kubenswrapper[5028]: I1123 07:14:38.564682 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:38 crc kubenswrapper[5028]: I1123 07:14:38.565465 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:38 crc kubenswrapper[5028]: I1123 07:14:38.605923 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:38 crc kubenswrapper[5028]: I1123 07:14:38.638845 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5wnwm" podStartSLOduration=8.173064493 podStartE2EDuration="10.638812317s" podCreationTimestamp="2025-11-23 07:14:28 +0000 UTC" firstStartedPulling="2025-11-23 07:14:29.146721464 +0000 UTC m=+1452.844126253" lastFinishedPulling="2025-11-23 07:14:31.612469298 +0000 UTC m=+1455.309874077" observedRunningTime="2025-11-23 07:14:32.201171069 +0000 UTC m=+1455.898575868" watchObservedRunningTime="2025-11-23 07:14:38.638812317 +0000 UTC m=+1462.336217136" Nov 23 07:14:39 crc kubenswrapper[5028]: I1123 07:14:39.288007 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:39 crc kubenswrapper[5028]: I1123 07:14:39.334442 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wnwm"] Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.261778 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5wnwm" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="registry-server" containerID="cri-o://e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3" gracePeriod=2 Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.640101 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.808304 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-utilities\") pod \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.808363 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-catalog-content\") pod \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.808417 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhftg\" (UniqueName: \"kubernetes.io/projected/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-kube-api-access-vhftg\") pod \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\" (UID: \"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5\") " Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.809234 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-utilities" (OuterVolumeSpecName: "utilities") pod "fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" (UID: "fbf00fc5-6ab9-49fa-9f61-48e90dad44e5"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.814191 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-kube-api-access-vhftg" (OuterVolumeSpecName: "kube-api-access-vhftg") pod "fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" (UID: "fbf00fc5-6ab9-49fa-9f61-48e90dad44e5"). InnerVolumeSpecName "kube-api-access-vhftg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.862733 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" (UID: "fbf00fc5-6ab9-49fa-9f61-48e90dad44e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.910127 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.910190 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:41 crc kubenswrapper[5028]: I1123 07:14:41.910219 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhftg\" (UniqueName: \"kubernetes.io/projected/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5-kube-api-access-vhftg\") on node \"crc\" DevicePath \"\"" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.277436 5028 generic.go:334] "Generic (PLEG): container finished" podID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerID="e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3" exitCode=0 Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.277495 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wnwm" event={"ID":"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5","Type":"ContainerDied","Data":"e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3"} Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.277517 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wnwm" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.277542 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wnwm" event={"ID":"fbf00fc5-6ab9-49fa-9f61-48e90dad44e5","Type":"ContainerDied","Data":"a4da62615204442e8c606c8d38736f389fcfa37733968b61547b97266631c667"} Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.277575 5028 scope.go:117] "RemoveContainer" containerID="e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.316455 5028 scope.go:117] "RemoveContainer" containerID="954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.321546 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wnwm"] Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.328526 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5wnwm"] Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.337073 5028 scope.go:117] "RemoveContainer" containerID="828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.380295 5028 scope.go:117] "RemoveContainer" containerID="e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3" Nov 23 07:14:42 crc kubenswrapper[5028]: E1123 07:14:42.380831 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3\": container with ID starting with e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3 not found: ID does not exist" containerID="e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.380902 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3"} err="failed to get container status \"e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3\": rpc error: code = NotFound desc = could not find container \"e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3\": container with ID starting with e68288b91cfe21289723c6f5695baf4c5529dfc20acf408394e5ba6901b564a3 not found: ID does not exist" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.380941 5028 scope.go:117] "RemoveContainer" containerID="954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73" Nov 23 07:14:42 crc kubenswrapper[5028]: E1123 07:14:42.381572 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73\": container with ID starting with 954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73 not found: ID does not exist" containerID="954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.381603 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73"} err="failed to get container status \"954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73\": rpc error: code = NotFound desc = could not find 
container \"954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73\": container with ID starting with 954f5048d97f13eaa247d083a66d7d8db449817976801a44b9f8edb3579c4d73 not found: ID does not exist" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.381624 5028 scope.go:117] "RemoveContainer" containerID="828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e" Nov 23 07:14:42 crc kubenswrapper[5028]: E1123 07:14:42.381889 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e\": container with ID starting with 828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e not found: ID does not exist" containerID="828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e" Nov 23 07:14:42 crc kubenswrapper[5028]: I1123 07:14:42.381909 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e"} err="failed to get container status \"828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e\": rpc error: code = NotFound desc = could not find container \"828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e\": container with ID starting with 828189342d6b4ca5af23f74bc48f8c0f77d58e15e61f883d3cd7bd3ea0d5d51e not found: ID does not exist" Nov 23 07:14:43 crc kubenswrapper[5028]: I1123 07:14:43.063801 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" path="/var/lib/kubelet/pods/fbf00fc5-6ab9-49fa-9f61-48e90dad44e5/volumes" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.528192 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wrx6c"] Nov 23 07:14:53 crc kubenswrapper[5028]: E1123 07:14:53.529030 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="extract-utilities" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.529045 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="extract-utilities" Nov 23 07:14:53 crc kubenswrapper[5028]: E1123 07:14:53.529069 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="registry-server" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.529077 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="registry-server" Nov 23 07:14:53 crc kubenswrapper[5028]: E1123 07:14:53.529101 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="extract-content" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.529108 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="extract-content" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.529277 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf00fc5-6ab9-49fa-9f61-48e90dad44e5" containerName="registry-server" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.530400 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.540519 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrx6c"] Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.577234 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-utilities\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.577304 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-catalog-content\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.577449 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffr97\" (UniqueName: \"kubernetes.io/projected/65eaff43-60e7-4b2a-980c-4a306f9a9a26-kube-api-access-ffr97\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.678604 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffr97\" (UniqueName: \"kubernetes.io/projected/65eaff43-60e7-4b2a-980c-4a306f9a9a26-kube-api-access-ffr97\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.678729 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-utilities\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.678770 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-catalog-content\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.679566 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-utilities\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.679625 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-catalog-content\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.704026 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ffr97\" (UniqueName: \"kubernetes.io/projected/65eaff43-60e7-4b2a-980c-4a306f9a9a26-kube-api-access-ffr97\") pod \"community-operators-wrx6c\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:53 crc kubenswrapper[5028]: I1123 07:14:53.848679 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:14:54 crc kubenswrapper[5028]: I1123 07:14:54.120387 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrx6c"] Nov 23 07:14:54 crc kubenswrapper[5028]: I1123 07:14:54.398918 5028 generic.go:334] "Generic (PLEG): container finished" podID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerID="03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356" exitCode=0 Nov 23 07:14:54 crc kubenswrapper[5028]: I1123 07:14:54.398988 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrx6c" event={"ID":"65eaff43-60e7-4b2a-980c-4a306f9a9a26","Type":"ContainerDied","Data":"03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356"} Nov 23 07:14:54 crc kubenswrapper[5028]: I1123 07:14:54.399045 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrx6c" event={"ID":"65eaff43-60e7-4b2a-980c-4a306f9a9a26","Type":"ContainerStarted","Data":"698ac883621d5153bad8a787abbbccf3040e4e9d1d70143a5c7395fea39538f8"} Nov 23 07:14:55 crc kubenswrapper[5028]: I1123 07:14:55.409502 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrx6c" event={"ID":"65eaff43-60e7-4b2a-980c-4a306f9a9a26","Type":"ContainerStarted","Data":"8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf"} Nov 23 07:14:56 crc kubenswrapper[5028]: I1123 07:14:56.419158 5028 generic.go:334] "Generic (PLEG): container finished" podID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerID="8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf" exitCode=0 Nov 23 07:14:56 crc kubenswrapper[5028]: I1123 07:14:56.419276 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrx6c" event={"ID":"65eaff43-60e7-4b2a-980c-4a306f9a9a26","Type":"ContainerDied","Data":"8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf"} Nov 23 07:14:57 crc kubenswrapper[5028]: I1123 07:14:57.433356 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrx6c" event={"ID":"65eaff43-60e7-4b2a-980c-4a306f9a9a26","Type":"ContainerStarted","Data":"271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d"} Nov 23 07:14:57 crc kubenswrapper[5028]: I1123 07:14:57.460453 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wrx6c" podStartSLOduration=2.037381491 podStartE2EDuration="4.460434682s" podCreationTimestamp="2025-11-23 07:14:53 +0000 UTC" firstStartedPulling="2025-11-23 07:14:54.400788629 +0000 UTC m=+1478.098193408" lastFinishedPulling="2025-11-23 07:14:56.82384182 +0000 UTC m=+1480.521246599" observedRunningTime="2025-11-23 07:14:57.457233524 +0000 UTC m=+1481.154638353" watchObservedRunningTime="2025-11-23 07:14:57.460434682 +0000 UTC m=+1481.157839471" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.154351 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t"] Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.155444 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.157900 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.157958 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.165534 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t"] Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.165984 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efdbebd-cdbd-429a-bb93-c1983c87b38c-secret-volume\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.166153 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efdbebd-cdbd-429a-bb93-c1983c87b38c-config-volume\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.166200 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49dsq\" (UniqueName: \"kubernetes.io/projected/2efdbebd-cdbd-429a-bb93-c1983c87b38c-kube-api-access-49dsq\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.267093 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efdbebd-cdbd-429a-bb93-c1983c87b38c-secret-volume\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.267376 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efdbebd-cdbd-429a-bb93-c1983c87b38c-config-volume\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.267393 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49dsq\" (UniqueName: \"kubernetes.io/projected/2efdbebd-cdbd-429a-bb93-c1983c87b38c-kube-api-access-49dsq\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.268808 
5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efdbebd-cdbd-429a-bb93-c1983c87b38c-config-volume\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.273886 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efdbebd-cdbd-429a-bb93-c1983c87b38c-secret-volume\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.283293 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49dsq\" (UniqueName: \"kubernetes.io/projected/2efdbebd-cdbd-429a-bb93-c1983c87b38c-kube-api-access-49dsq\") pod \"collect-profiles-29398035-rck8t\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.478170 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.866851 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t"] Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.946176 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.946283 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.946445 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.947132 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9f3c8aba32cb3954ec9074865809a85ea7ad623b69b523ea2d4be3943bce523"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:15:00 crc kubenswrapper[5028]: I1123 07:15:00.947195 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://a9f3c8aba32cb3954ec9074865809a85ea7ad623b69b523ea2d4be3943bce523" gracePeriod=600 Nov 23 07:15:01 crc kubenswrapper[5028]: I1123 07:15:01.468382 5028 generic.go:334] "Generic (PLEG): container finished" podID="2efdbebd-cdbd-429a-bb93-c1983c87b38c" 
containerID="6af93a82f17c6f49cf96b012bceaabf7f525eaf7fb276c031c37810456fa3310" exitCode=0 Nov 23 07:15:01 crc kubenswrapper[5028]: I1123 07:15:01.468469 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" event={"ID":"2efdbebd-cdbd-429a-bb93-c1983c87b38c","Type":"ContainerDied","Data":"6af93a82f17c6f49cf96b012bceaabf7f525eaf7fb276c031c37810456fa3310"} Nov 23 07:15:01 crc kubenswrapper[5028]: I1123 07:15:01.468781 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" event={"ID":"2efdbebd-cdbd-429a-bb93-c1983c87b38c","Type":"ContainerStarted","Data":"a9679f72ba6d79b9b9d07e35ace831bb78dde47338f95bb9768597d252646db1"} Nov 23 07:15:01 crc kubenswrapper[5028]: I1123 07:15:01.477132 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="a9f3c8aba32cb3954ec9074865809a85ea7ad623b69b523ea2d4be3943bce523" exitCode=0 Nov 23 07:15:01 crc kubenswrapper[5028]: I1123 07:15:01.477172 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"a9f3c8aba32cb3954ec9074865809a85ea7ad623b69b523ea2d4be3943bce523"} Nov 23 07:15:01 crc kubenswrapper[5028]: I1123 07:15:01.477224 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803"} Nov 23 07:15:01 crc kubenswrapper[5028]: I1123 07:15:01.477251 5028 scope.go:117] "RemoveContainer" containerID="fad5b43d5150bb13c86e0639fa99a9b8f8c637943306815b8e96f42f58e277f1" Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.729596 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.809707 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efdbebd-cdbd-429a-bb93-c1983c87b38c-secret-volume\") pod \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.809789 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efdbebd-cdbd-429a-bb93-c1983c87b38c-config-volume\") pod \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.809811 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49dsq\" (UniqueName: \"kubernetes.io/projected/2efdbebd-cdbd-429a-bb93-c1983c87b38c-kube-api-access-49dsq\") pod \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\" (UID: \"2efdbebd-cdbd-429a-bb93-c1983c87b38c\") " Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.810675 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2efdbebd-cdbd-429a-bb93-c1983c87b38c-config-volume" (OuterVolumeSpecName: "config-volume") pod "2efdbebd-cdbd-429a-bb93-c1983c87b38c" (UID: "2efdbebd-cdbd-429a-bb93-c1983c87b38c"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.814760 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2efdbebd-cdbd-429a-bb93-c1983c87b38c-kube-api-access-49dsq" (OuterVolumeSpecName: "kube-api-access-49dsq") pod "2efdbebd-cdbd-429a-bb93-c1983c87b38c" (UID: "2efdbebd-cdbd-429a-bb93-c1983c87b38c"). InnerVolumeSpecName "kube-api-access-49dsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.814860 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2efdbebd-cdbd-429a-bb93-c1983c87b38c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2efdbebd-cdbd-429a-bb93-c1983c87b38c" (UID: "2efdbebd-cdbd-429a-bb93-c1983c87b38c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.911804 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2efdbebd-cdbd-429a-bb93-c1983c87b38c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.912080 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49dsq\" (UniqueName: \"kubernetes.io/projected/2efdbebd-cdbd-429a-bb93-c1983c87b38c-kube-api-access-49dsq\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:02 crc kubenswrapper[5028]: I1123 07:15:02.912143 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2efdbebd-cdbd-429a-bb93-c1983c87b38c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:03 crc kubenswrapper[5028]: I1123 07:15:03.497185 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" event={"ID":"2efdbebd-cdbd-429a-bb93-c1983c87b38c","Type":"ContainerDied","Data":"a9679f72ba6d79b9b9d07e35ace831bb78dde47338f95bb9768597d252646db1"} Nov 23 07:15:03 crc kubenswrapper[5028]: I1123 07:15:03.497222 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9679f72ba6d79b9b9d07e35ace831bb78dde47338f95bb9768597d252646db1" Nov 23 07:15:03 crc kubenswrapper[5028]: I1123 07:15:03.497272 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t" Nov 23 07:15:03 crc kubenswrapper[5028]: I1123 07:15:03.848987 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:15:03 crc kubenswrapper[5028]: I1123 07:15:03.849380 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:15:03 crc kubenswrapper[5028]: I1123 07:15:03.892795 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:15:04 crc kubenswrapper[5028]: I1123 07:15:04.555507 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:15:04 crc kubenswrapper[5028]: I1123 07:15:04.604860 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrx6c"] Nov 23 07:15:06 crc kubenswrapper[5028]: I1123 07:15:06.523453 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wrx6c" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="registry-server" containerID="cri-o://271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d" gracePeriod=2 Nov 23 07:15:06 crc kubenswrapper[5028]: I1123 07:15:06.951591 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:15:06 crc kubenswrapper[5028]: I1123 07:15:06.971035 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-utilities\") pod \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " Nov 23 07:15:06 crc kubenswrapper[5028]: I1123 07:15:06.971094 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-catalog-content\") pod \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " Nov 23 07:15:06 crc kubenswrapper[5028]: I1123 07:15:06.971172 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffr97\" (UniqueName: \"kubernetes.io/projected/65eaff43-60e7-4b2a-980c-4a306f9a9a26-kube-api-access-ffr97\") pod \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\" (UID: \"65eaff43-60e7-4b2a-980c-4a306f9a9a26\") " Nov 23 07:15:06 crc kubenswrapper[5028]: I1123 07:15:06.973036 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-utilities" (OuterVolumeSpecName: "utilities") pod "65eaff43-60e7-4b2a-980c-4a306f9a9a26" (UID: "65eaff43-60e7-4b2a-980c-4a306f9a9a26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:15:06 crc kubenswrapper[5028]: I1123 07:15:06.978228 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65eaff43-60e7-4b2a-980c-4a306f9a9a26-kube-api-access-ffr97" (OuterVolumeSpecName: "kube-api-access-ffr97") pod "65eaff43-60e7-4b2a-980c-4a306f9a9a26" (UID: "65eaff43-60e7-4b2a-980c-4a306f9a9a26"). InnerVolumeSpecName "kube-api-access-ffr97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.036213 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65eaff43-60e7-4b2a-980c-4a306f9a9a26" (UID: "65eaff43-60e7-4b2a-980c-4a306f9a9a26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.073010 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.073042 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65eaff43-60e7-4b2a-980c-4a306f9a9a26-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.073052 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffr97\" (UniqueName: \"kubernetes.io/projected/65eaff43-60e7-4b2a-980c-4a306f9a9a26-kube-api-access-ffr97\") on node \"crc\" DevicePath \"\"" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.536734 5028 generic.go:334] "Generic (PLEG): container finished" podID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerID="271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d" exitCode=0 Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.536799 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrx6c" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.536857 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrx6c" event={"ID":"65eaff43-60e7-4b2a-980c-4a306f9a9a26","Type":"ContainerDied","Data":"271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d"} Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.538163 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrx6c" event={"ID":"65eaff43-60e7-4b2a-980c-4a306f9a9a26","Type":"ContainerDied","Data":"698ac883621d5153bad8a787abbbccf3040e4e9d1d70143a5c7395fea39538f8"} Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.538198 5028 scope.go:117] "RemoveContainer" containerID="271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.569444 5028 scope.go:117] "RemoveContainer" containerID="8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.571821 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrx6c"] Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.583776 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wrx6c"] Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.605735 5028 scope.go:117] "RemoveContainer" containerID="03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.635130 5028 scope.go:117] "RemoveContainer" containerID="271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d" Nov 23 07:15:07 crc kubenswrapper[5028]: E1123 07:15:07.635692 5028 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d\": container with ID starting with 271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d not found: ID does not exist" containerID="271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.635755 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d"} err="failed to get container status \"271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d\": rpc error: code = NotFound desc = could not find container \"271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d\": container with ID starting with 271203bec1621b28bcb29b4fce006ca353394f1dae81b2156826fe96fd3a5f8d not found: ID does not exist" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.635797 5028 scope.go:117] "RemoveContainer" containerID="8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf" Nov 23 07:15:07 crc kubenswrapper[5028]: E1123 07:15:07.636202 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf\": container with ID starting with 8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf not found: ID does not exist" containerID="8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.636249 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf"} err="failed to get container status \"8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf\": rpc error: code = NotFound desc = could not find container \"8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf\": container with ID starting with 8df5b2c6c93a62ab6379976ae647f6abc004c1187e5ccb80c945ef562058c5bf not found: ID does not exist" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.636274 5028 scope.go:117] "RemoveContainer" containerID="03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356" Nov 23 07:15:07 crc kubenswrapper[5028]: E1123 07:15:07.636648 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356\": container with ID starting with 03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356 not found: ID does not exist" containerID="03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356" Nov 23 07:15:07 crc kubenswrapper[5028]: I1123 07:15:07.636678 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356"} err="failed to get container status \"03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356\": rpc error: code = NotFound desc = could not find container \"03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356\": container with ID starting with 03072911b869a9d8730b484a68cdf1780032840e208434781e387e94ed30a356 not found: ID does not exist" Nov 23 07:15:09 crc kubenswrapper[5028]: I1123 07:15:09.061582 5028 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" path="/var/lib/kubelet/pods/65eaff43-60e7-4b2a-980c-4a306f9a9a26/volumes" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.520631 5028 scope.go:117] "RemoveContainer" containerID="46c8fdaa953f4356184a9b835cf0e50171a5139cfaf43a971d468ac33651df8f" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.552260 5028 scope.go:117] "RemoveContainer" containerID="487521214eeb02d9de53ad55fcef5dea8d15920f39d4c5626968c7c0a735b745" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.573695 5028 scope.go:117] "RemoveContainer" containerID="7cfb30b9c3f18728879f8478ba9936131bc7e83d3a5d60433da19a0ecea1feb7" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.589437 5028 scope.go:117] "RemoveContainer" containerID="cac29302621a7fbc9644c1dac717103cb9daabdefd6fe800f446150bd88ee23b" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.615518 5028 scope.go:117] "RemoveContainer" containerID="0ae8d8e3dd53a9174fc5d55f7293f9f537efa92b9a5ea3f89fa1c1f4fa8c615e" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.648595 5028 scope.go:117] "RemoveContainer" containerID="12af1423a295c6f478c8785d5aa48a368910a459421fea55e61284425915587f" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.671631 5028 scope.go:117] "RemoveContainer" containerID="2386705abd6eaf35538b22b345a492d38b6d23697b55057e0679f33a8700fefd" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.705790 5028 scope.go:117] "RemoveContainer" containerID="59ca6f41da174e8068bb5c1fa9541bd862fe411625339888ca7f7ff2ab63b9ef" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.724140 5028 scope.go:117] "RemoveContainer" containerID="9f197175460f75323e366d22e7ffbd1000cbc5540ba72dc6c4f24628ee23a4c6" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.739309 5028 scope.go:117] "RemoveContainer" containerID="481a7c1912530aa19976454094bc1eafee12f607b50a187d2973b114aebc1b12" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.761703 5028 scope.go:117] "RemoveContainer" containerID="e94f774274e49deacd01adc7ca38ac5ec13f56f9fa03fa345f51ea85f097dc30" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.779475 5028 scope.go:117] "RemoveContainer" containerID="241f9f536ab38394348af160013fb0390313172f6bde249dc5d6d02b4ba10fb4" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.803329 5028 scope.go:117] "RemoveContainer" containerID="d5c510e8694c8063ad9c3383683fd4c6601677b404ce17dce2a337a4142de95c" Nov 23 07:15:20 crc kubenswrapper[5028]: I1123 07:15:20.824139 5028 scope.go:117] "RemoveContainer" containerID="9d025670f0758890925706d019dec0c58ba40c04e9ce8ae615b3bedf7b0255db" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.015779 5028 scope.go:117] "RemoveContainer" containerID="08d729c3c5943787d8d97cf784800554db6bc9ac7a28eebd0fbec742d8db6711" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.058087 5028 scope.go:117] "RemoveContainer" containerID="8cbb2a112cb05c3a4ff0551cbc9cb7a30fe09d0a7e86376732f2ee0979b8a58a" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.081519 5028 scope.go:117] "RemoveContainer" containerID="3bb8b33fe6e5b4407f5b6a3e5f6c662c7dc73f42b9338e7c86cf692773a77030" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.117270 5028 scope.go:117] "RemoveContainer" containerID="630b0afa1537ce69941a71ded3e76c91b4b8f7e406c4cce4564688b1413fee1a" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.165120 5028 scope.go:117] "RemoveContainer" containerID="88644cdbb4374bf4c073a29fd16593f8d69ed669e05515272f3ea6f3db4edd4c" Nov 
23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.190425 5028 scope.go:117] "RemoveContainer" containerID="1f51e7c906609c0ffde79dcfa255aa9e3cf5469f48ac52a4d77c96aaa45cadd1" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.215531 5028 scope.go:117] "RemoveContainer" containerID="88d19b7b033e2e13685a9d25e3e33cc4ed358eed5376265f13a413a01fc54c3f" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.251232 5028 scope.go:117] "RemoveContainer" containerID="9023b2e6e2d8f4408363912b73a54f2f4c4dd8f5701bb322b8691b1effd16944" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.288271 5028 scope.go:117] "RemoveContainer" containerID="ae5b287a41e57ddc9ba6125b4534965ed7c6b5a24e3cb68dd46326b1643008bc" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.319229 5028 scope.go:117] "RemoveContainer" containerID="4b4b6cd6d812dd10bca7b5bf605672ef58794edc00c2fe75180902fd552e0d18" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.341093 5028 scope.go:117] "RemoveContainer" containerID="da3485b8f367c453a3b55731c8c85e776b41135de878e8be472158c1fc4d40f3" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.368237 5028 scope.go:117] "RemoveContainer" containerID="fb0c136a883e5097dee7d61f52e18b211b1aa04f3990800f6dee2cb5309eb9a9" Nov 23 07:16:21 crc kubenswrapper[5028]: I1123 07:16:21.415258 5028 scope.go:117] "RemoveContainer" containerID="27bf9b397e896cd8bc8fd40b3f85b6d290c89b1f88de3910d80025bbd49ee0f4" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.674944 5028 scope.go:117] "RemoveContainer" containerID="489d955f8bae0d4ecbdf8344baa829141870b26ffc1a299b4b9998131b5d5ea4" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.722293 5028 scope.go:117] "RemoveContainer" containerID="48d4e75c14dedc9c49dfc74f70328222f45b05342aa264bb83c91e776ffe2617" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.745759 5028 scope.go:117] "RemoveContainer" containerID="477afbc1bdc34977d24d5e8925956af63ee650a2490e70a160d5ae0e9bae6c9b" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.779605 5028 scope.go:117] "RemoveContainer" containerID="838f759bd47bf9f7518789583273dbebd929bd5875966b81a9e459143473d685" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.805157 5028 scope.go:117] "RemoveContainer" containerID="2615070d3b8815e54e0b7edb40162492d505ba0ce3f8900f378b4e7fd3cf11d2" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.831750 5028 scope.go:117] "RemoveContainer" containerID="7a3f640c803b7aede08da9084eb7cbdccf31faa94c005b15c074e8234995862a" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.852572 5028 scope.go:117] "RemoveContainer" containerID="5c6c526d79aa0e5f2a9c03d7440a3625e79fc7e6164cb907a3f55aad201ead50" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.874086 5028 scope.go:117] "RemoveContainer" containerID="cd2681dcfce8f7732a81760ac09f887e3912267a3c7661c4353b9574b37422dd" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.891853 5028 scope.go:117] "RemoveContainer" containerID="f9e28eb9d85cec0a94161344fe470187e060552cb2ba5add91964b16fd771169" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.914742 5028 scope.go:117] "RemoveContainer" containerID="9f6f77c785c513f419c76be87f7280a8d9724d0b20cf91e0391a4a8a4591486d" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.950747 5028 scope.go:117] "RemoveContainer" containerID="1f403dbae9bd40294ebff3aa78450948e835cd967861b28e083f238438f08caa" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.975438 5028 scope.go:117] "RemoveContainer" 
containerID="2a139570e27e4d3f8e409cd6102e980a7101788faa70c851a25210cd2a440e17" Nov 23 07:17:21 crc kubenswrapper[5028]: I1123 07:17:21.996358 5028 scope.go:117] "RemoveContainer" containerID="f218f774db4ceb31b910144408d32162ae8bbf909051deba432e8f2bc15e5458" Nov 23 07:17:22 crc kubenswrapper[5028]: I1123 07:17:22.020360 5028 scope.go:117] "RemoveContainer" containerID="c2313efa3dc196f747bb767207978f9b4f70c79524bd34c535de3bdf4ae01e56" Nov 23 07:17:22 crc kubenswrapper[5028]: I1123 07:17:22.043023 5028 scope.go:117] "RemoveContainer" containerID="7e3d6583f81183730daa6f3092393a053d2d9a7c825fd26714e87f123ad7e913" Nov 23 07:17:22 crc kubenswrapper[5028]: I1123 07:17:22.064107 5028 scope.go:117] "RemoveContainer" containerID="3131435438b601514b90e4b84200190dc90bb9ec48c9c5011919f75f86066438" Nov 23 07:17:22 crc kubenswrapper[5028]: I1123 07:17:22.082723 5028 scope.go:117] "RemoveContainer" containerID="546cfbcd2ed04dac8ebc09498036b695ca889314b9fb5b598fd0f2bea9fb808a" Nov 23 07:17:30 crc kubenswrapper[5028]: I1123 07:17:30.946797 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:17:30 crc kubenswrapper[5028]: I1123 07:17:30.947549 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:18:00 crc kubenswrapper[5028]: I1123 07:18:00.947092 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:18:00 crc kubenswrapper[5028]: I1123 07:18:00.947699 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.245328 5028 scope.go:117] "RemoveContainer" containerID="4217f079f64973f5810531f58f78f10d09ef89a5b6288de55515155677e95e0a" Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.275392 5028 scope.go:117] "RemoveContainer" containerID="d347353db33af1d6ba4d84c69e24716a8023002479762dc4dc845b00b4cdd85d" Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.296318 5028 scope.go:117] "RemoveContainer" containerID="4992ef2262000f57f2dd5976bb7cab491e707987e342db67b34c401e28c4687a" Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.311365 5028 scope.go:117] "RemoveContainer" containerID="caedb149b29e09c90cd8368ec2f32d3b903eaf6fed6262d942c66c394ec827bb" Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.370167 5028 scope.go:117] "RemoveContainer" containerID="552e38d88689ec46fa8727e87ef0816f6ef787b7b5c68938daf09f72f26e9a9f" Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.424999 5028 scope.go:117] "RemoveContainer" containerID="6f3ee00baa666ce4b0a62f5838f506fd3532211c0c7576d534a5dbad1491e360" Nov 23 
Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.449603 5028 scope.go:117] "RemoveContainer" containerID="8074037e96098a8a2eef2221bb33919ab66ce6682e6fcf1f6adb64b678e2bbed"
Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.474095 5028 scope.go:117] "RemoveContainer" containerID="22b4e39ada1d711841f2d5ab6596639fc374030997ad8dac32ca318f11a99634"
Nov 23 07:18:22 crc kubenswrapper[5028]: I1123 07:18:22.510298 5028 scope.go:117] "RemoveContainer" containerID="34a6dec9846a2576b2a4998da3347057d7253ffff2a03d02cf24883c4bb89960"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.662047 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7c9"]
Nov 23 07:18:30 crc kubenswrapper[5028]: E1123 07:18:30.664067 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="registry-server"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.664093 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="registry-server"
Nov 23 07:18:30 crc kubenswrapper[5028]: E1123 07:18:30.664129 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2efdbebd-cdbd-429a-bb93-c1983c87b38c" containerName="collect-profiles"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.664140 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2efdbebd-cdbd-429a-bb93-c1983c87b38c" containerName="collect-profiles"
Nov 23 07:18:30 crc kubenswrapper[5028]: E1123 07:18:30.664175 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="extract-utilities"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.664188 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="extract-utilities"
Nov 23 07:18:30 crc kubenswrapper[5028]: E1123 07:18:30.664224 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="extract-content"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.664235 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="extract-content"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.664685 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2efdbebd-cdbd-429a-bb93-c1983c87b38c" containerName="collect-profiles"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.664727 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eaff43-60e7-4b2a-980c-4a306f9a9a26" containerName="registry-server"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.667452 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.679582 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7c9"]
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.730274 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwx5d\" (UniqueName: \"kubernetes.io/projected/bb7be35b-7cae-493d-a3c5-838430d06f10-kube-api-access-bwx5d\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.730369 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-utilities\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.730484 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-catalog-content\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.831668 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwx5d\" (UniqueName: \"kubernetes.io/projected/bb7be35b-7cae-493d-a3c5-838430d06f10-kube-api-access-bwx5d\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.831970 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-utilities\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.832178 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-catalog-content\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.832640 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-utilities\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.832677 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-catalog-content\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.852233 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwx5d\" (UniqueName: \"kubernetes.io/projected/bb7be35b-7cae-493d-a3c5-838430d06f10-kube-api-access-bwx5d\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9"
succeeded for volume \"kube-api-access-bwx5d\" (UniqueName: \"kubernetes.io/projected/bb7be35b-7cae-493d-a3c5-838430d06f10-kube-api-access-bwx5d\") pod \"redhat-marketplace-xw7c9\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.946916 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.947036 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.947103 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.947929 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:18:30 crc kubenswrapper[5028]: I1123 07:18:30.948083 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" gracePeriod=600 Nov 23 07:18:31 crc kubenswrapper[5028]: I1123 07:18:31.005353 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:31 crc kubenswrapper[5028]: E1123 07:18:31.079685 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:18:31 crc kubenswrapper[5028]: I1123 07:18:31.317016 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" exitCode=0 Nov 23 07:18:31 crc kubenswrapper[5028]: I1123 07:18:31.317176 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803"} Nov 23 07:18:31 crc kubenswrapper[5028]: I1123 07:18:31.317430 5028 scope.go:117] "RemoveContainer" containerID="a9f3c8aba32cb3954ec9074865809a85ea7ad623b69b523ea2d4be3943bce523" Nov 23 07:18:31 crc kubenswrapper[5028]: I1123 07:18:31.318032 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:18:31 crc kubenswrapper[5028]: E1123 07:18:31.318330 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:18:31 crc kubenswrapper[5028]: I1123 07:18:31.517508 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7c9"] Nov 23 07:18:32 crc kubenswrapper[5028]: I1123 07:18:32.329241 5028 generic.go:334] "Generic (PLEG): container finished" podID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerID="5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd" exitCode=0 Nov 23 07:18:32 crc kubenswrapper[5028]: I1123 07:18:32.329437 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7c9" event={"ID":"bb7be35b-7cae-493d-a3c5-838430d06f10","Type":"ContainerDied","Data":"5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd"} Nov 23 07:18:32 crc kubenswrapper[5028]: I1123 07:18:32.329459 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7c9" event={"ID":"bb7be35b-7cae-493d-a3c5-838430d06f10","Type":"ContainerStarted","Data":"19ff22e3961318a90c7af80a2d92dcdb00b966def0293c7ceed73c50c2a7a10e"} Nov 23 07:18:33 crc kubenswrapper[5028]: I1123 07:18:33.338221 5028 generic.go:334] "Generic (PLEG): container finished" podID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerID="1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01" exitCode=0 Nov 23 07:18:33 crc kubenswrapper[5028]: I1123 07:18:33.338261 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7c9" 
event={"ID":"bb7be35b-7cae-493d-a3c5-838430d06f10","Type":"ContainerDied","Data":"1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01"} Nov 23 07:18:34 crc kubenswrapper[5028]: I1123 07:18:34.347324 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7c9" event={"ID":"bb7be35b-7cae-493d-a3c5-838430d06f10","Type":"ContainerStarted","Data":"c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc"} Nov 23 07:18:34 crc kubenswrapper[5028]: I1123 07:18:34.368817 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xw7c9" podStartSLOduration=2.981475043 podStartE2EDuration="4.368796178s" podCreationTimestamp="2025-11-23 07:18:30 +0000 UTC" firstStartedPulling="2025-11-23 07:18:32.330738973 +0000 UTC m=+1696.028143752" lastFinishedPulling="2025-11-23 07:18:33.718060098 +0000 UTC m=+1697.415464887" observedRunningTime="2025-11-23 07:18:34.362586256 +0000 UTC m=+1698.059991035" watchObservedRunningTime="2025-11-23 07:18:34.368796178 +0000 UTC m=+1698.066200957" Nov 23 07:18:41 crc kubenswrapper[5028]: I1123 07:18:41.005851 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:41 crc kubenswrapper[5028]: I1123 07:18:41.006450 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:41 crc kubenswrapper[5028]: I1123 07:18:41.069805 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:41 crc kubenswrapper[5028]: I1123 07:18:41.470811 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:41 crc kubenswrapper[5028]: I1123 07:18:41.523614 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7c9"] Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.053704 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:18:43 crc kubenswrapper[5028]: E1123 07:18:43.054128 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.422387 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xw7c9" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="registry-server" containerID="cri-o://c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc" gracePeriod=2 Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.864228 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.970007 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-catalog-content\") pod \"bb7be35b-7cae-493d-a3c5-838430d06f10\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.970208 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-utilities\") pod \"bb7be35b-7cae-493d-a3c5-838430d06f10\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.970257 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwx5d\" (UniqueName: \"kubernetes.io/projected/bb7be35b-7cae-493d-a3c5-838430d06f10-kube-api-access-bwx5d\") pod \"bb7be35b-7cae-493d-a3c5-838430d06f10\" (UID: \"bb7be35b-7cae-493d-a3c5-838430d06f10\") " Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.971460 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-utilities" (OuterVolumeSpecName: "utilities") pod "bb7be35b-7cae-493d-a3c5-838430d06f10" (UID: "bb7be35b-7cae-493d-a3c5-838430d06f10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:18:43 crc kubenswrapper[5028]: I1123 07:18:43.977500 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7be35b-7cae-493d-a3c5-838430d06f10-kube-api-access-bwx5d" (OuterVolumeSpecName: "kube-api-access-bwx5d") pod "bb7be35b-7cae-493d-a3c5-838430d06f10" (UID: "bb7be35b-7cae-493d-a3c5-838430d06f10"). InnerVolumeSpecName "kube-api-access-bwx5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.000017 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb7be35b-7cae-493d-a3c5-838430d06f10" (UID: "bb7be35b-7cae-493d-a3c5-838430d06f10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.072270 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.072327 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwx5d\" (UniqueName: \"kubernetes.io/projected/bb7be35b-7cae-493d-a3c5-838430d06f10-kube-api-access-bwx5d\") on node \"crc\" DevicePath \"\"" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.072347 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7be35b-7cae-493d-a3c5-838430d06f10-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.431218 5028 generic.go:334] "Generic (PLEG): container finished" podID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerID="c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc" exitCode=0 Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.431274 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7c9" event={"ID":"bb7be35b-7cae-493d-a3c5-838430d06f10","Type":"ContainerDied","Data":"c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc"} Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.431314 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xw7c9" event={"ID":"bb7be35b-7cae-493d-a3c5-838430d06f10","Type":"ContainerDied","Data":"19ff22e3961318a90c7af80a2d92dcdb00b966def0293c7ceed73c50c2a7a10e"} Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.431342 5028 scope.go:117] "RemoveContainer" containerID="c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.431506 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xw7c9" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.474823 5028 scope.go:117] "RemoveContainer" containerID="1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.485929 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7c9"] Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.491502 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xw7c9"] Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.498316 5028 scope.go:117] "RemoveContainer" containerID="5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.529508 5028 scope.go:117] "RemoveContainer" containerID="c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc" Nov 23 07:18:44 crc kubenswrapper[5028]: E1123 07:18:44.531293 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc\": container with ID starting with c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc not found: ID does not exist" containerID="c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.531352 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc"} err="failed to get container status \"c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc\": rpc error: code = NotFound desc = could not find container \"c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc\": container with ID starting with c33ceb6007b942ec0738fb8cf0215015151dee5673bdc2b7abf77c82d767ecfc not found: ID does not exist" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.531396 5028 scope.go:117] "RemoveContainer" containerID="1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01" Nov 23 07:18:44 crc kubenswrapper[5028]: E1123 07:18:44.531840 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01\": container with ID starting with 1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01 not found: ID does not exist" containerID="1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.531878 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01"} err="failed to get container status \"1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01\": rpc error: code = NotFound desc = could not find container \"1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01\": container with ID starting with 1bc25e786bb7c3f622c7b8b5a6cc49e44a14fd0720533c8029690017bd1e5c01 not found: ID does not exist" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.531907 5028 scope.go:117] "RemoveContainer" containerID="5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd" Nov 23 07:18:44 crc kubenswrapper[5028]: E1123 07:18:44.532265 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd\": container with ID starting with 5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd not found: ID does not exist" containerID="5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd" Nov 23 07:18:44 crc kubenswrapper[5028]: I1123 07:18:44.532297 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd"} err="failed to get container status \"5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd\": rpc error: code = NotFound desc = could not find container \"5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd\": container with ID starting with 5000dcd87e53276cdbcae0369bbff44e45f9b3056def157640dc0892f9fff0fd not found: ID does not exist" Nov 23 07:18:45 crc kubenswrapper[5028]: I1123 07:18:45.074631 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" path="/var/lib/kubelet/pods/bb7be35b-7cae-493d-a3c5-838430d06f10/volumes" Nov 23 07:18:58 crc kubenswrapper[5028]: I1123 07:18:58.052711 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:18:58 crc kubenswrapper[5028]: E1123 07:18:58.053455 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:19:12 crc kubenswrapper[5028]: I1123 07:19:12.053941 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:19:12 crc kubenswrapper[5028]: E1123 07:19:12.054669 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:19:22 crc kubenswrapper[5028]: I1123 07:19:22.641723 5028 scope.go:117] "RemoveContainer" containerID="5dc8cea2a082f78c34b4f793fb541c20becf1183832ccdc56cdb5c470fec475a" Nov 23 07:19:22 crc kubenswrapper[5028]: I1123 07:19:22.666330 5028 scope.go:117] "RemoveContainer" containerID="dbdc0da81bb53c52de0946232513f986627786cc13a4935b407621df5a7225be" Nov 23 07:19:22 crc kubenswrapper[5028]: I1123 07:19:22.686586 5028 scope.go:117] "RemoveContainer" containerID="c99dbc45954ad6ec68de64bda24d6319695ecb2a85091d9874ecea7b556a7023" Nov 23 07:19:22 crc kubenswrapper[5028]: I1123 07:19:22.751135 5028 scope.go:117] "RemoveContainer" containerID="fb829f046617ccb1432e54fee659ef7e9fceb2ee18d795f15b09ddc7e9e8e047" Nov 23 07:19:22 crc kubenswrapper[5028]: I1123 07:19:22.771079 5028 scope.go:117] "RemoveContainer" containerID="3f849cfb8e74b4980c92c33ab679fd9a8d36c82079aa4ad05015977a5e743ef6" Nov 23 07:19:24 crc kubenswrapper[5028]: I1123 07:19:24.053275 5028 scope.go:117] "RemoveContainer" 
containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:19:24 crc kubenswrapper[5028]: E1123 07:19:24.053584 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:19:38 crc kubenswrapper[5028]: I1123 07:19:38.052469 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:19:38 crc kubenswrapper[5028]: E1123 07:19:38.053327 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:19:53 crc kubenswrapper[5028]: I1123 07:19:53.053635 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:19:53 crc kubenswrapper[5028]: E1123 07:19:53.054463 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:20:05 crc kubenswrapper[5028]: I1123 07:20:05.053708 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:20:05 crc kubenswrapper[5028]: E1123 07:20:05.055439 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:20:19 crc kubenswrapper[5028]: I1123 07:20:19.053338 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:20:19 crc kubenswrapper[5028]: E1123 07:20:19.054163 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:20:22 crc kubenswrapper[5028]: I1123 07:20:22.835934 5028 scope.go:117] "RemoveContainer" containerID="c0e438b5feadcc60de0125b6ead3243fc9fdba982d8c527a3af16187c90ff94e" Nov 23 07:20:22 crc kubenswrapper[5028]: I1123 07:20:22.856341 5028 scope.go:117] "RemoveContainer" 
containerID="60d5dfda719cca7c70347469f4adf01b003d4790534774d7508b2cbaec093460" Nov 23 07:20:32 crc kubenswrapper[5028]: I1123 07:20:32.052912 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:20:32 crc kubenswrapper[5028]: E1123 07:20:32.053632 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:20:43 crc kubenswrapper[5028]: I1123 07:20:43.053560 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:20:43 crc kubenswrapper[5028]: E1123 07:20:43.054531 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:20:55 crc kubenswrapper[5028]: I1123 07:20:55.053575 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:20:55 crc kubenswrapper[5028]: E1123 07:20:55.054601 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:21:07 crc kubenswrapper[5028]: I1123 07:21:07.061155 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:21:07 crc kubenswrapper[5028]: E1123 07:21:07.061658 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:21:20 crc kubenswrapper[5028]: I1123 07:21:20.053204 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:21:20 crc kubenswrapper[5028]: E1123 07:21:20.054118 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:21:35 crc kubenswrapper[5028]: I1123 07:21:35.054072 5028 scope.go:117] "RemoveContainer" 
containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:21:35 crc kubenswrapper[5028]: E1123 07:21:35.055465 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:21:48 crc kubenswrapper[5028]: I1123 07:21:48.053322 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:21:48 crc kubenswrapper[5028]: E1123 07:21:48.054062 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:22:01 crc kubenswrapper[5028]: I1123 07:22:01.053393 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:22:01 crc kubenswrapper[5028]: E1123 07:22:01.055364 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:22:16 crc kubenswrapper[5028]: I1123 07:22:16.053927 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:22:16 crc kubenswrapper[5028]: E1123 07:22:16.054991 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:22:31 crc kubenswrapper[5028]: I1123 07:22:31.053719 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:22:31 crc kubenswrapper[5028]: E1123 07:22:31.055413 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:22:44 crc kubenswrapper[5028]: I1123 07:22:44.052801 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:22:44 crc kubenswrapper[5028]: E1123 07:22:44.053558 5028 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:22:58 crc kubenswrapper[5028]: I1123 07:22:58.053002 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:22:58 crc kubenswrapper[5028]: E1123 07:22:58.053714 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:23:11 crc kubenswrapper[5028]: I1123 07:23:11.052863 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:23:11 crc kubenswrapper[5028]: E1123 07:23:11.053596 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:23:24 crc kubenswrapper[5028]: I1123 07:23:24.053290 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:23:24 crc kubenswrapper[5028]: E1123 07:23:24.054637 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:23:36 crc kubenswrapper[5028]: I1123 07:23:36.052832 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:23:37 crc kubenswrapper[5028]: I1123 07:23:37.096203 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"782f86c38923576d22672d1444874303a2956eb9b0c634e5bc6ab205f922aa0e"} Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.381658 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xrf5g"] Nov 23 07:25:19 crc kubenswrapper[5028]: E1123 07:25:19.384101 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="extract-content" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.384160 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="extract-content" Nov 23 07:25:19 crc kubenswrapper[5028]: E1123 07:25:19.384197 5028 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="registry-server" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.384210 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="registry-server" Nov 23 07:25:19 crc kubenswrapper[5028]: E1123 07:25:19.384262 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="extract-utilities" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.384275 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="extract-utilities" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.385161 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7be35b-7cae-493d-a3c5-838430d06f10" containerName="registry-server" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.395687 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.403218 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xrf5g"] Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.519514 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-catalog-content\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.519559 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdt2\" (UniqueName: \"kubernetes.io/projected/1555c113-b287-4d84-809b-0d22217a3d5b-kube-api-access-swdt2\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.519582 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-utilities\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.620751 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-catalog-content\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.620820 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swdt2\" (UniqueName: \"kubernetes.io/projected/1555c113-b287-4d84-809b-0d22217a3d5b-kube-api-access-swdt2\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.620895 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-utilities\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.621412 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-catalog-content\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.621459 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-utilities\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.654696 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swdt2\" (UniqueName: \"kubernetes.io/projected/1555c113-b287-4d84-809b-0d22217a3d5b-kube-api-access-swdt2\") pod \"certified-operators-xrf5g\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:19 crc kubenswrapper[5028]: I1123 07:25:19.728896 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:20 crc kubenswrapper[5028]: I1123 07:25:20.024450 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xrf5g"] Nov 23 07:25:20 crc kubenswrapper[5028]: I1123 07:25:20.782141 5028 generic.go:334] "Generic (PLEG): container finished" podID="1555c113-b287-4d84-809b-0d22217a3d5b" containerID="dc15bfa40ee7e3d2604f6ffa9c98fc4a1c75e01c37e06feecc5616b159b22007" exitCode=0 Nov 23 07:25:20 crc kubenswrapper[5028]: I1123 07:25:20.782208 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrf5g" event={"ID":"1555c113-b287-4d84-809b-0d22217a3d5b","Type":"ContainerDied","Data":"dc15bfa40ee7e3d2604f6ffa9c98fc4a1c75e01c37e06feecc5616b159b22007"} Nov 23 07:25:20 crc kubenswrapper[5028]: I1123 07:25:20.782613 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrf5g" event={"ID":"1555c113-b287-4d84-809b-0d22217a3d5b","Type":"ContainerStarted","Data":"be101ec881feefc8bad98b2659a080a425d698f37a278f800f95478556a78c28"} Nov 23 07:25:20 crc kubenswrapper[5028]: I1123 07:25:20.785014 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:25:21 crc kubenswrapper[5028]: I1123 07:25:21.792092 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrf5g" event={"ID":"1555c113-b287-4d84-809b-0d22217a3d5b","Type":"ContainerStarted","Data":"7f1207f1462e12fddbbc0d57f65b680c54c52581b3f28ebb0b923457d123a22d"} Nov 23 07:25:22 crc kubenswrapper[5028]: I1123 07:25:22.800788 5028 generic.go:334] "Generic (PLEG): container finished" podID="1555c113-b287-4d84-809b-0d22217a3d5b" containerID="7f1207f1462e12fddbbc0d57f65b680c54c52581b3f28ebb0b923457d123a22d" exitCode=0 Nov 23 07:25:22 crc kubenswrapper[5028]: I1123 07:25:22.800835 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-xrf5g" event={"ID":"1555c113-b287-4d84-809b-0d22217a3d5b","Type":"ContainerDied","Data":"7f1207f1462e12fddbbc0d57f65b680c54c52581b3f28ebb0b923457d123a22d"} Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.521970 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6gwqh"] Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.524037 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.532505 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gwqh"] Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.590254 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-utilities\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.590331 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-catalog-content\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.590395 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlsx8\" (UniqueName: \"kubernetes.io/projected/2319a12e-1295-4e77-8428-7b0f9fc94765-kube-api-access-jlsx8\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.691404 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-utilities\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.691773 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-catalog-content\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.691810 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlsx8\" (UniqueName: \"kubernetes.io/projected/2319a12e-1295-4e77-8428-7b0f9fc94765-kube-api-access-jlsx8\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.691985 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-utilities\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 
Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.712553 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlsx8\" (UniqueName: \"kubernetes.io/projected/2319a12e-1295-4e77-8428-7b0f9fc94765-kube-api-access-jlsx8\") pod \"redhat-operators-6gwqh\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") " pod="openshift-marketplace/redhat-operators-6gwqh"
Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.810530 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrf5g" event={"ID":"1555c113-b287-4d84-809b-0d22217a3d5b","Type":"ContainerStarted","Data":"5202c0d25d4b09b0ee238f82f51fbba194165cae5269713f0375c874700d874a"}
Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.830555 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xrf5g" podStartSLOduration=2.36113254 podStartE2EDuration="4.830539617s" podCreationTimestamp="2025-11-23 07:25:19 +0000 UTC" firstStartedPulling="2025-11-23 07:25:20.784496802 +0000 UTC m=+2104.481901611" lastFinishedPulling="2025-11-23 07:25:23.253903909 +0000 UTC m=+2106.951308688" observedRunningTime="2025-11-23 07:25:23.82863068 +0000 UTC m=+2107.526035459" watchObservedRunningTime="2025-11-23 07:25:23.830539617 +0000 UTC m=+2107.527944396"
Nov 23 07:25:23 crc kubenswrapper[5028]: I1123 07:25:23.867827 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gwqh"
Nov 23 07:25:24 crc kubenswrapper[5028]: I1123 07:25:24.312400 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gwqh"]
Nov 23 07:25:24 crc kubenswrapper[5028]: I1123 07:25:24.818936 5028 generic.go:334] "Generic (PLEG): container finished" podID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerID="8a8ea49666a0dc6beb5923ba5e65899c753203d95762023795bfbffb03aa1ab9" exitCode=0
Nov 23 07:25:24 crc kubenswrapper[5028]: I1123 07:25:24.819011 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gwqh" event={"ID":"2319a12e-1295-4e77-8428-7b0f9fc94765","Type":"ContainerDied","Data":"8a8ea49666a0dc6beb5923ba5e65899c753203d95762023795bfbffb03aa1ab9"}
Nov 23 07:25:24 crc kubenswrapper[5028]: I1123 07:25:24.819073 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gwqh" event={"ID":"2319a12e-1295-4e77-8428-7b0f9fc94765","Type":"ContainerStarted","Data":"936ecc810813c8ca4c71c93fce592021d1aa12dbfeb496c21f93924be03a45d5"}
Nov 23 07:25:25 crc kubenswrapper[5028]: I1123 07:25:25.829180 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gwqh" event={"ID":"2319a12e-1295-4e77-8428-7b0f9fc94765","Type":"ContainerStarted","Data":"aaa4a013c97fe7f12a837934fe63b2fe1b92864dc0ee6468958070bf1ebe9b36"}
Nov 23 07:25:26 crc kubenswrapper[5028]: I1123 07:25:26.839044 5028 generic.go:334] "Generic (PLEG): container finished" podID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerID="aaa4a013c97fe7f12a837934fe63b2fe1b92864dc0ee6468958070bf1ebe9b36" exitCode=0
Nov 23 07:25:26 crc kubenswrapper[5028]: I1123 07:25:26.839095 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gwqh" event={"ID":"2319a12e-1295-4e77-8428-7b0f9fc94765","Type":"ContainerDied","Data":"aaa4a013c97fe7f12a837934fe63b2fe1b92864dc0ee6468958070bf1ebe9b36"}
Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.852401 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gwqh" event={"ID":"2319a12e-1295-4e77-8428-7b0f9fc94765","Type":"ContainerStarted","Data":"837c6f7b140a7be71db48d2157394aa5123ba90bf9c1ce914d11bd8519810873"}
Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.875317 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6gwqh" podStartSLOduration=2.384619802 podStartE2EDuration="4.87529279s" podCreationTimestamp="2025-11-23 07:25:23 +0000 UTC" firstStartedPulling="2025-11-23 07:25:24.821188767 +0000 UTC m=+2108.518593546" lastFinishedPulling="2025-11-23 07:25:27.311861755 +0000 UTC m=+2111.009266534" observedRunningTime="2025-11-23 07:25:27.873399854 +0000 UTC m=+2111.570804663" watchObservedRunningTime="2025-11-23 07:25:27.87529279 +0000 UTC m=+2111.572697579"
Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.930854 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7c8gg"]
Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.942208 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c8gg"
Need to start a new one" pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.949866 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57330ce-b7e7-4850-ac33-d0eb0438206f-utilities\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.949912 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57330ce-b7e7-4850-ac33-d0eb0438206f-catalog-content\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.949956 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpq56\" (UniqueName: \"kubernetes.io/projected/b57330ce-b7e7-4850-ac33-d0eb0438206f-kube-api-access-wpq56\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:27 crc kubenswrapper[5028]: I1123 07:25:27.969470 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c8gg"] Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.051536 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57330ce-b7e7-4850-ac33-d0eb0438206f-utilities\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.051606 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57330ce-b7e7-4850-ac33-d0eb0438206f-catalog-content\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.051641 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpq56\" (UniqueName: \"kubernetes.io/projected/b57330ce-b7e7-4850-ac33-d0eb0438206f-kube-api-access-wpq56\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.051918 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b57330ce-b7e7-4850-ac33-d0eb0438206f-utilities\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.052579 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b57330ce-b7e7-4850-ac33-d0eb0438206f-catalog-content\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.073955 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wpq56\" (UniqueName: \"kubernetes.io/projected/b57330ce-b7e7-4850-ac33-d0eb0438206f-kube-api-access-wpq56\") pod \"community-operators-7c8gg\" (UID: \"b57330ce-b7e7-4850-ac33-d0eb0438206f\") " pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.269728 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.735779 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c8gg"] Nov 23 07:25:28 crc kubenswrapper[5028]: W1123 07:25:28.740101 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb57330ce_b7e7_4850_ac33_d0eb0438206f.slice/crio-70ae18f293cb8f8bc1ed7837cc0dc7a2fc61c0ed56afe7f142b3f8a9dce02b67 WatchSource:0}: Error finding container 70ae18f293cb8f8bc1ed7837cc0dc7a2fc61c0ed56afe7f142b3f8a9dce02b67: Status 404 returned error can't find the container with id 70ae18f293cb8f8bc1ed7837cc0dc7a2fc61c0ed56afe7f142b3f8a9dce02b67 Nov 23 07:25:28 crc kubenswrapper[5028]: I1123 07:25:28.861024 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c8gg" event={"ID":"b57330ce-b7e7-4850-ac33-d0eb0438206f","Type":"ContainerStarted","Data":"70ae18f293cb8f8bc1ed7837cc0dc7a2fc61c0ed56afe7f142b3f8a9dce02b67"} Nov 23 07:25:29 crc kubenswrapper[5028]: I1123 07:25:29.729359 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:29 crc kubenswrapper[5028]: I1123 07:25:29.729744 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:29 crc kubenswrapper[5028]: I1123 07:25:29.772820 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:29 crc kubenswrapper[5028]: I1123 07:25:29.870387 5028 generic.go:334] "Generic (PLEG): container finished" podID="b57330ce-b7e7-4850-ac33-d0eb0438206f" containerID="55ee84adddc1c6bc7a1159236b02d8be32b424b96cf021953d9b87e38ad12cb4" exitCode=0 Nov 23 07:25:29 crc kubenswrapper[5028]: I1123 07:25:29.870487 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c8gg" event={"ID":"b57330ce-b7e7-4850-ac33-d0eb0438206f","Type":"ContainerDied","Data":"55ee84adddc1c6bc7a1159236b02d8be32b424b96cf021953d9b87e38ad12cb4"} Nov 23 07:25:29 crc kubenswrapper[5028]: I1123 07:25:29.928526 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:32 crc kubenswrapper[5028]: I1123 07:25:32.312017 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xrf5g"] Nov 23 07:25:32 crc kubenswrapper[5028]: I1123 07:25:32.312646 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xrf5g" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="registry-server" containerID="cri-o://5202c0d25d4b09b0ee238f82f51fbba194165cae5269713f0375c874700d874a" gracePeriod=2 Nov 23 07:25:32 crc kubenswrapper[5028]: I1123 07:25:32.893745 5028 generic.go:334] "Generic (PLEG): container finished" 
podID="1555c113-b287-4d84-809b-0d22217a3d5b" containerID="5202c0d25d4b09b0ee238f82f51fbba194165cae5269713f0375c874700d874a" exitCode=0 Nov 23 07:25:32 crc kubenswrapper[5028]: I1123 07:25:32.893791 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrf5g" event={"ID":"1555c113-b287-4d84-809b-0d22217a3d5b","Type":"ContainerDied","Data":"5202c0d25d4b09b0ee238f82f51fbba194165cae5269713f0375c874700d874a"} Nov 23 07:25:33 crc kubenswrapper[5028]: I1123 07:25:33.868511 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:33 crc kubenswrapper[5028]: I1123 07:25:33.868758 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:33 crc kubenswrapper[5028]: I1123 07:25:33.910123 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:33 crc kubenswrapper[5028]: I1123 07:25:33.964845 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.210087 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xrf5g" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.317661 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swdt2\" (UniqueName: \"kubernetes.io/projected/1555c113-b287-4d84-809b-0d22217a3d5b-kube-api-access-swdt2\") pod \"1555c113-b287-4d84-809b-0d22217a3d5b\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.317789 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-utilities\") pod \"1555c113-b287-4d84-809b-0d22217a3d5b\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.317816 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-catalog-content\") pod \"1555c113-b287-4d84-809b-0d22217a3d5b\" (UID: \"1555c113-b287-4d84-809b-0d22217a3d5b\") " Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.318930 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-utilities" (OuterVolumeSpecName: "utilities") pod "1555c113-b287-4d84-809b-0d22217a3d5b" (UID: "1555c113-b287-4d84-809b-0d22217a3d5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.323902 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1555c113-b287-4d84-809b-0d22217a3d5b-kube-api-access-swdt2" (OuterVolumeSpecName: "kube-api-access-swdt2") pod "1555c113-b287-4d84-809b-0d22217a3d5b" (UID: "1555c113-b287-4d84-809b-0d22217a3d5b"). InnerVolumeSpecName "kube-api-access-swdt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.368700 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1555c113-b287-4d84-809b-0d22217a3d5b" (UID: "1555c113-b287-4d84-809b-0d22217a3d5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.419591 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swdt2\" (UniqueName: \"kubernetes.io/projected/1555c113-b287-4d84-809b-0d22217a3d5b-kube-api-access-swdt2\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.419619 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.419628 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1555c113-b287-4d84-809b-0d22217a3d5b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.909461 5028 generic.go:334] "Generic (PLEG): container finished" podID="b57330ce-b7e7-4850-ac33-d0eb0438206f" containerID="7e03ffb673f805e3c7f69b28ff822d44fe3bc051078dae8869190be2f664f992" exitCode=0 Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.909527 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c8gg" event={"ID":"b57330ce-b7e7-4850-ac33-d0eb0438206f","Type":"ContainerDied","Data":"7e03ffb673f805e3c7f69b28ff822d44fe3bc051078dae8869190be2f664f992"} Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.913261 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xrf5g" event={"ID":"1555c113-b287-4d84-809b-0d22217a3d5b","Type":"ContainerDied","Data":"be101ec881feefc8bad98b2659a080a425d698f37a278f800f95478556a78c28"} Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.913256 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.913328 5028 scope.go:117] "RemoveContainer" containerID="5202c0d25d4b09b0ee238f82f51fbba194165cae5269713f0375c874700d874a"
Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.939422 5028 scope.go:117] "RemoveContainer" containerID="7f1207f1462e12fddbbc0d57f65b680c54c52581b3f28ebb0b923457d123a22d"
Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.959781 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xrf5g"]
Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.963437 5028 scope.go:117] "RemoveContainer" containerID="dc15bfa40ee7e3d2604f6ffa9c98fc4a1c75e01c37e06feecc5616b159b22007"
Nov 23 07:25:34 crc kubenswrapper[5028]: I1123 07:25:34.965734 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xrf5g"]
Nov 23 07:25:35 crc kubenswrapper[5028]: I1123 07:25:35.062029 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" path="/var/lib/kubelet/pods/1555c113-b287-4d84-809b-0d22217a3d5b/volumes"
Nov 23 07:25:35 crc kubenswrapper[5028]: I1123 07:25:35.923264 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7c8gg" event={"ID":"b57330ce-b7e7-4850-ac33-d0eb0438206f","Type":"ContainerStarted","Data":"674ba7740111297b4095e4b520d4e017cc1f34fdbba0de31c18cb454f51f22fe"}
Nov 23 07:25:35 crc kubenswrapper[5028]: I1123 07:25:35.947489 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7c8gg" podStartSLOduration=3.504444794 podStartE2EDuration="8.947466009s" podCreationTimestamp="2025-11-23 07:25:27 +0000 UTC" firstStartedPulling="2025-11-23 07:25:29.871905784 +0000 UTC m=+2113.569310563" lastFinishedPulling="2025-11-23 07:25:35.314926999 +0000 UTC m=+2119.012331778" observedRunningTime="2025-11-23 07:25:35.941331808 +0000 UTC m=+2119.638736587" watchObservedRunningTime="2025-11-23 07:25:35.947466009 +0000 UTC m=+2119.644870798"
Nov 23 07:25:36 crc kubenswrapper[5028]: I1123 07:25:36.314120 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gwqh"]
Nov 23 07:25:36 crc kubenswrapper[5028]: I1123 07:25:36.314371 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6gwqh" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="registry-server" containerID="cri-o://837c6f7b140a7be71db48d2157394aa5123ba90bf9c1ce914d11bd8519810873" gracePeriod=2
Nov 23 07:25:37 crc kubenswrapper[5028]: I1123 07:25:37.941989 5028 generic.go:334] "Generic (PLEG): container finished" podID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerID="837c6f7b140a7be71db48d2157394aa5123ba90bf9c1ce914d11bd8519810873" exitCode=0
Nov 23 07:25:37 crc kubenswrapper[5028]: I1123 07:25:37.941990 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gwqh" event={"ID":"2319a12e-1295-4e77-8428-7b0f9fc94765","Type":"ContainerDied","Data":"837c6f7b140a7be71db48d2157394aa5123ba90bf9c1ce914d11bd8519810873"}
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.270373 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7c8gg"
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.270506 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7c8gg"
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.311706 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7c8gg"
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.495920 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gwqh"
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.580984 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-utilities\") pod \"2319a12e-1295-4e77-8428-7b0f9fc94765\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") "
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.581059 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlsx8\" (UniqueName: \"kubernetes.io/projected/2319a12e-1295-4e77-8428-7b0f9fc94765-kube-api-access-jlsx8\") pod \"2319a12e-1295-4e77-8428-7b0f9fc94765\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") "
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.581282 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-catalog-content\") pod \"2319a12e-1295-4e77-8428-7b0f9fc94765\" (UID: \"2319a12e-1295-4e77-8428-7b0f9fc94765\") "
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.585383 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-utilities" (OuterVolumeSpecName: "utilities") pod "2319a12e-1295-4e77-8428-7b0f9fc94765" (UID: "2319a12e-1295-4e77-8428-7b0f9fc94765"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.589527 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2319a12e-1295-4e77-8428-7b0f9fc94765-kube-api-access-jlsx8" (OuterVolumeSpecName: "kube-api-access-jlsx8") pod "2319a12e-1295-4e77-8428-7b0f9fc94765" (UID: "2319a12e-1295-4e77-8428-7b0f9fc94765"). InnerVolumeSpecName "kube-api-access-jlsx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.675221 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2319a12e-1295-4e77-8428-7b0f9fc94765" (UID: "2319a12e-1295-4e77-8428-7b0f9fc94765"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.683695 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.683731 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlsx8\" (UniqueName: \"kubernetes.io/projected/2319a12e-1295-4e77-8428-7b0f9fc94765-kube-api-access-jlsx8\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.683768 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2319a12e-1295-4e77-8428-7b0f9fc94765-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.955684 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gwqh" Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.955767 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gwqh" event={"ID":"2319a12e-1295-4e77-8428-7b0f9fc94765","Type":"ContainerDied","Data":"936ecc810813c8ca4c71c93fce592021d1aa12dbfeb496c21f93924be03a45d5"} Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.955826 5028 scope.go:117] "RemoveContainer" containerID="837c6f7b140a7be71db48d2157394aa5123ba90bf9c1ce914d11bd8519810873" Nov 23 07:25:38 crc kubenswrapper[5028]: I1123 07:25:38.997533 5028 scope.go:117] "RemoveContainer" containerID="aaa4a013c97fe7f12a837934fe63b2fe1b92864dc0ee6468958070bf1ebe9b36" Nov 23 07:25:39 crc kubenswrapper[5028]: I1123 07:25:39.002772 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gwqh"] Nov 23 07:25:39 crc kubenswrapper[5028]: I1123 07:25:39.009479 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6gwqh"] Nov 23 07:25:39 crc kubenswrapper[5028]: I1123 07:25:39.019800 5028 scope.go:117] "RemoveContainer" containerID="8a8ea49666a0dc6beb5923ba5e65899c753203d95762023795bfbffb03aa1ab9" Nov 23 07:25:39 crc kubenswrapper[5028]: I1123 07:25:39.062863 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" path="/var/lib/kubelet/pods/2319a12e-1295-4e77-8428-7b0f9fc94765/volumes" Nov 23 07:25:48 crc kubenswrapper[5028]: I1123 07:25:48.321261 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7c8gg" Nov 23 07:25:48 crc kubenswrapper[5028]: I1123 07:25:48.390432 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7c8gg"] Nov 23 07:25:48 crc kubenswrapper[5028]: I1123 07:25:48.441651 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlftc"] Nov 23 07:25:48 crc kubenswrapper[5028]: I1123 07:25:48.441998 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xlftc" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="registry-server" containerID="cri-o://fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b" gracePeriod=2 Nov 23 07:25:48 crc kubenswrapper[5028]: I1123 07:25:48.947985 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.059290 5028 generic.go:334] "Generic (PLEG): container finished" podID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerID="fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b" exitCode=0
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.059417 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlftc"
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.064369 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlftc" event={"ID":"c3b796b1-e1ea-4f63-8639-dd8575ea6985","Type":"ContainerDied","Data":"fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b"}
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.064409 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlftc" event={"ID":"c3b796b1-e1ea-4f63-8639-dd8575ea6985","Type":"ContainerDied","Data":"88b181da018898fc1c56237b6fa92ec2d8392bb54ed0f57675154a4cdba9aff5"}
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.064427 5028 scope.go:117] "RemoveContainer" containerID="fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b"
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.085233 5028 scope.go:117] "RemoveContainer" containerID="c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c"
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.102122 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-catalog-content\") pod \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") "
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.102168 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llh59\" (UniqueName: \"kubernetes.io/projected/c3b796b1-e1ea-4f63-8639-dd8575ea6985-kube-api-access-llh59\") pod \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") "
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.102289 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-utilities\") pod \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\" (UID: \"c3b796b1-e1ea-4f63-8639-dd8575ea6985\") "
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.102812 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-utilities" (OuterVolumeSpecName: "utilities") pod "c3b796b1-e1ea-4f63-8639-dd8575ea6985" (UID: "c3b796b1-e1ea-4f63-8639-dd8575ea6985"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.109028 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b796b1-e1ea-4f63-8639-dd8575ea6985-kube-api-access-llh59" (OuterVolumeSpecName: "kube-api-access-llh59") pod "c3b796b1-e1ea-4f63-8639-dd8575ea6985" (UID: "c3b796b1-e1ea-4f63-8639-dd8575ea6985"). InnerVolumeSpecName "kube-api-access-llh59". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.109177 5028 scope.go:117] "RemoveContainer" containerID="2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.151491 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3b796b1-e1ea-4f63-8639-dd8575ea6985" (UID: "c3b796b1-e1ea-4f63-8639-dd8575ea6985"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.159121 5028 scope.go:117] "RemoveContainer" containerID="fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b" Nov 23 07:25:49 crc kubenswrapper[5028]: E1123 07:25:49.163724 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b\": container with ID starting with fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b not found: ID does not exist" containerID="fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.163760 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b"} err="failed to get container status \"fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b\": rpc error: code = NotFound desc = could not find container \"fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b\": container with ID starting with fa58cebd2beaaef6df1289278f7bc114499bf8f79cc129243b67fa2af06d882b not found: ID does not exist" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.163781 5028 scope.go:117] "RemoveContainer" containerID="c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c" Nov 23 07:25:49 crc kubenswrapper[5028]: E1123 07:25:49.164043 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c\": container with ID starting with c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c not found: ID does not exist" containerID="c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.164066 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c"} err="failed to get container status \"c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c\": rpc error: code = NotFound desc = could not find container \"c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c\": container with ID starting with c281624405fd77380a24c2ac8f412711b10125e8b11b3c7100cdd76fa76e636c not found: ID does not exist" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.164080 5028 scope.go:117] "RemoveContainer" containerID="2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f" Nov 23 07:25:49 crc kubenswrapper[5028]: E1123 07:25:49.164975 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f\": container with ID starting with 2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f not found: ID does not exist" containerID="2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.164996 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f"} err="failed to get container status \"2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f\": rpc error: code = NotFound desc = could not find container \"2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f\": container with ID starting with 2d553ebc6db69df91102941f68159f49654851028bfa93070c93a7a39ea05d6f not found: ID does not exist" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.203907 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.203936 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b796b1-e1ea-4f63-8639-dd8575ea6985-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.203947 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llh59\" (UniqueName: \"kubernetes.io/projected/c3b796b1-e1ea-4f63-8639-dd8575ea6985-kube-api-access-llh59\") on node \"crc\" DevicePath \"\"" Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.402275 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlftc"] Nov 23 07:25:49 crc kubenswrapper[5028]: I1123 07:25:49.409435 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xlftc"] Nov 23 07:25:51 crc kubenswrapper[5028]: I1123 07:25:51.059961 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" path="/var/lib/kubelet/pods/c3b796b1-e1ea-4f63-8639-dd8575ea6985/volumes" Nov 23 07:26:00 crc kubenswrapper[5028]: I1123 07:26:00.946207 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:26:00 crc kubenswrapper[5028]: I1123 07:26:00.946787 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:26:30 crc kubenswrapper[5028]: I1123 07:26:30.945912 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:26:30 crc kubenswrapper[5028]: I1123 07:26:30.946603 5028 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:27:00 crc kubenswrapper[5028]: I1123 07:27:00.946809 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:27:00 crc kubenswrapper[5028]: I1123 07:27:00.947648 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:27:00 crc kubenswrapper[5028]: I1123 07:27:00.947737 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:27:00 crc kubenswrapper[5028]: I1123 07:27:00.948598 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"782f86c38923576d22672d1444874303a2956eb9b0c634e5bc6ab205f922aa0e"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:27:00 crc kubenswrapper[5028]: I1123 07:27:00.948689 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://782f86c38923576d22672d1444874303a2956eb9b0c634e5bc6ab205f922aa0e" gracePeriod=600 Nov 23 07:27:01 crc kubenswrapper[5028]: I1123 07:27:01.678916 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="782f86c38923576d22672d1444874303a2956eb9b0c634e5bc6ab205f922aa0e" exitCode=0 Nov 23 07:27:01 crc kubenswrapper[5028]: I1123 07:27:01.679017 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"782f86c38923576d22672d1444874303a2956eb9b0c634e5bc6ab205f922aa0e"} Nov 23 07:27:01 crc kubenswrapper[5028]: I1123 07:27:01.680810 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b"} Nov 23 07:27:01 crc kubenswrapper[5028]: I1123 07:27:01.680886 5028 scope.go:117] "RemoveContainer" containerID="d5dce1cd603d3db7197a7a662e0a6c8133f6b01714fa087b9e5c4b692d887803" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.514465 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dvblx"] Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515373 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="extract-content" Nov 23 07:28:34 
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.514465 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dvblx"]
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515373 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="extract-content"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515389 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="extract-content"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515403 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="extract-utilities"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515411 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="extract-utilities"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515424 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="registry-server"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515431 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="registry-server"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515442 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="registry-server"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515450 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2319a12e-1295-4e77-8428-7b0f9fc94765" containerName="registry-server"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515467 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="extract-utilities"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515474 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="extract-utilities"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515486 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="extract-content"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515494 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="extract-content"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515509 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="registry-server"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515519 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="registry-server"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515534 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="extract-utilities"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515552 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="extract-utilities"
Nov 23 07:28:34 crc kubenswrapper[5028]: E1123 07:28:34.515573 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="extract-content"
Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515581 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="extract-content"
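
Admitting redhat-marketplace-dvblx triggers the resource managers' housekeeping: for every container of the three catalog pods deleted around 07:25, cpu_manager logs "RemoveStaleState: removing container" (at error level, though it is routine) and state_mem confirms "Deleted CPUSet assignment". A sketch that pairs the two streams to confirm nothing is left unmatched:

    import re
    import sys

    rm = re.compile(r'"RemoveStaleState: removing container" podUID="([^"]+)" '
                    r'containerName="([^"]+)"')
    done = re.compile(r'"Deleted CPUSet assignment" podUID="([^"]+)" '
                      r'containerName="([^"]+)"')

    open_removals = set()
    for line in sys.stdin:
        if m := rm.search(line):
            open_removals.add(m.groups())
        elif m := done.search(line):
            open_removals.discard(m.groups())

    print("unpaired removals:", sorted(open_removals) or "none")
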
containerName="registry-server" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515799 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1555c113-b287-4d84-809b-0d22217a3d5b" containerName="registry-server" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.515813 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b796b1-e1ea-4f63-8639-dd8575ea6985" containerName="registry-server" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.517189 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.524773 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvblx"] Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.618529 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9482\" (UniqueName: \"kubernetes.io/projected/c4423a10-6187-4732-b2e8-dbc1638a0474-kube-api-access-z9482\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.618892 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-catalog-content\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.618995 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-utilities\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.720548 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9482\" (UniqueName: \"kubernetes.io/projected/c4423a10-6187-4732-b2e8-dbc1638a0474-kube-api-access-z9482\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.720597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-catalog-content\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.720680 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-utilities\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.721174 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-utilities\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " 
pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.721237 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-catalog-content\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.740356 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9482\" (UniqueName: \"kubernetes.io/projected/c4423a10-6187-4732-b2e8-dbc1638a0474-kube-api-access-z9482\") pod \"redhat-marketplace-dvblx\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") " pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:34 crc kubenswrapper[5028]: I1123 07:28:34.884768 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvblx" Nov 23 07:28:35 crc kubenswrapper[5028]: I1123 07:28:35.311360 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvblx"] Nov 23 07:28:35 crc kubenswrapper[5028]: I1123 07:28:35.504487 5028 generic.go:334] "Generic (PLEG): container finished" podID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerID="775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f" exitCode=0 Nov 23 07:28:35 crc kubenswrapper[5028]: I1123 07:28:35.504525 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvblx" event={"ID":"c4423a10-6187-4732-b2e8-dbc1638a0474","Type":"ContainerDied","Data":"775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f"} Nov 23 07:28:35 crc kubenswrapper[5028]: I1123 07:28:35.504548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvblx" event={"ID":"c4423a10-6187-4732-b2e8-dbc1638a0474","Type":"ContainerStarted","Data":"2a3782d1fa504fbd9047e173ece407e645378607dfc1584d75ec2c43cfbf41ad"} Nov 23 07:28:36 crc kubenswrapper[5028]: I1123 07:28:36.513736 5028 generic.go:334] "Generic (PLEG): container finished" podID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerID="443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd" exitCode=0 Nov 23 07:28:36 crc kubenswrapper[5028]: I1123 07:28:36.513785 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvblx" event={"ID":"c4423a10-6187-4732-b2e8-dbc1638a0474","Type":"ContainerDied","Data":"443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd"} Nov 23 07:28:37 crc kubenswrapper[5028]: I1123 07:28:37.522716 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvblx" event={"ID":"c4423a10-6187-4732-b2e8-dbc1638a0474","Type":"ContainerStarted","Data":"ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e"} Nov 23 07:28:37 crc kubenswrapper[5028]: I1123 07:28:37.544617 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dvblx" podStartSLOduration=1.817220314 podStartE2EDuration="3.544598426s" podCreationTimestamp="2025-11-23 07:28:34 +0000 UTC" firstStartedPulling="2025-11-23 07:28:35.507081259 +0000 UTC m=+2299.204486038" lastFinishedPulling="2025-11-23 07:28:37.234459381 +0000 UTC m=+2300.931864150" observedRunningTime="2025-11-23 07:28:37.54272823 +0000 UTC m=+2301.240133009" 
Nov 23 07:28:44 crc kubenswrapper[5028]: I1123 07:28:44.885570 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dvblx"
Nov 23 07:28:44 crc kubenswrapper[5028]: I1123 07:28:44.886104 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dvblx"
Nov 23 07:28:44 crc kubenswrapper[5028]: I1123 07:28:44.930565 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dvblx"
Nov 23 07:28:45 crc kubenswrapper[5028]: I1123 07:28:45.628754 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dvblx"
Nov 23 07:28:45 crc kubenswrapper[5028]: I1123 07:28:45.672985 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvblx"]
Nov 23 07:28:47 crc kubenswrapper[5028]: I1123 07:28:47.603192 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dvblx" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="registry-server" containerID="cri-o://ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e" gracePeriod=2
Nov 23 07:28:47 crc kubenswrapper[5028]: I1123 07:28:47.987244 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dvblx"
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.018475 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-catalog-content\") pod \"c4423a10-6187-4732-b2e8-dbc1638a0474\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") "
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.018554 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-utilities\") pod \"c4423a10-6187-4732-b2e8-dbc1638a0474\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") "
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.018601 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9482\" (UniqueName: \"kubernetes.io/projected/c4423a10-6187-4732-b2e8-dbc1638a0474-kube-api-access-z9482\") pod \"c4423a10-6187-4732-b2e8-dbc1638a0474\" (UID: \"c4423a10-6187-4732-b2e8-dbc1638a0474\") "
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.019518 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-utilities" (OuterVolumeSpecName: "utilities") pod "c4423a10-6187-4732-b2e8-dbc1638a0474" (UID: "c4423a10-6187-4732-b2e8-dbc1638a0474"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.034190 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4423a10-6187-4732-b2e8-dbc1638a0474-kube-api-access-z9482" (OuterVolumeSpecName: "kube-api-access-z9482") pod "c4423a10-6187-4732-b2e8-dbc1638a0474" (UID: "c4423a10-6187-4732-b2e8-dbc1638a0474"). InnerVolumeSpecName "kube-api-access-z9482". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.044283 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4423a10-6187-4732-b2e8-dbc1638a0474" (UID: "c4423a10-6187-4732-b2e8-dbc1638a0474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.120206 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9482\" (UniqueName: \"kubernetes.io/projected/c4423a10-6187-4732-b2e8-dbc1638a0474-kube-api-access-z9482\") on node \"crc\" DevicePath \"\"" Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.120478 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.120489 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4423a10-6187-4732-b2e8-dbc1638a0474-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.613145 5028 generic.go:334] "Generic (PLEG): container finished" podID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerID="ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e" exitCode=0 Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.613183 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvblx" event={"ID":"c4423a10-6187-4732-b2e8-dbc1638a0474","Type":"ContainerDied","Data":"ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e"} Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.613213 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dvblx" event={"ID":"c4423a10-6187-4732-b2e8-dbc1638a0474","Type":"ContainerDied","Data":"2a3782d1fa504fbd9047e173ece407e645378607dfc1584d75ec2c43cfbf41ad"} Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.613232 5028 scope.go:117] "RemoveContainer" containerID="ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e" Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.613289 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.631502 5028 scope.go:117] "RemoveContainer" containerID="443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd"
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.647846 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvblx"]
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.652793 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dvblx"]
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.653485 5028 scope.go:117] "RemoveContainer" containerID="775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f"
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.680198 5028 scope.go:117] "RemoveContainer" containerID="ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e"
Nov 23 07:28:48 crc kubenswrapper[5028]: E1123 07:28:48.680473 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e\": container with ID starting with ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e not found: ID does not exist" containerID="ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e"
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.680507 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e"} err="failed to get container status \"ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e\": rpc error: code = NotFound desc = could not find container \"ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e\": container with ID starting with ab056095355cccb2d1a74338c8529700bb2de1d23bbf37b1b8987decac40407e not found: ID does not exist"
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.680525 5028 scope.go:117] "RemoveContainer" containerID="443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd"
Nov 23 07:28:48 crc kubenswrapper[5028]: E1123 07:28:48.680681 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd\": container with ID starting with 443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd not found: ID does not exist" containerID="443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd"
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.680701 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd"} err="failed to get container status \"443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd\": rpc error: code = NotFound desc = could not find container \"443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd\": container with ID starting with 443571e192011f5f2526c2496b88d8d8103c03cef0b3e6032390220dca1437fd not found: ID does not exist"
Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.680715 5028 scope.go:117] "RemoveContainer" containerID="775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f"
Nov 23 07:28:48 crc kubenswrapper[5028]: E1123 07:28:48.680845 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f\": container with ID starting with 775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f not found: ID does not exist" containerID="775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f"
failed" err="rpc error: code = NotFound desc = could not find container \"775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f\": container with ID starting with 775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f not found: ID does not exist" containerID="775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f" Nov 23 07:28:48 crc kubenswrapper[5028]: I1123 07:28:48.680866 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f"} err="failed to get container status \"775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f\": rpc error: code = NotFound desc = could not find container \"775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f\": container with ID starting with 775d9698c95d442bff649c387efea09d3c7714098efad9d5d31bc06e22b5935f not found: ID does not exist" Nov 23 07:28:49 crc kubenswrapper[5028]: I1123 07:28:49.062667 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" path="/var/lib/kubelet/pods/c4423a10-6187-4732-b2e8-dbc1638a0474/volumes" Nov 23 07:29:30 crc kubenswrapper[5028]: I1123 07:29:30.946711 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:29:30 crc kubenswrapper[5028]: I1123 07:29:30.947394 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.152844 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"] Nov 23 07:30:00 crc kubenswrapper[5028]: E1123 07:30:00.153986 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="registry-server" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.154032 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="registry-server" Nov 23 07:30:00 crc kubenswrapper[5028]: E1123 07:30:00.154062 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="extract-content" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.154074 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="extract-content" Nov 23 07:30:00 crc kubenswrapper[5028]: E1123 07:30:00.154125 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="extract-utilities" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.154138 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="extract-utilities" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.154399 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4423a10-6187-4732-b2e8-dbc1638a0474" containerName="registry-server" Nov 23 07:30:00 crc 
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.155166 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.157115 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.157675 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"]
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.157767 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.330451 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2342d5-d133-44ae-957c-1a77cf088185-secret-volume\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.330524 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2342d5-d133-44ae-957c-1a77cf088185-config-volume\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.330556 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv29j\" (UniqueName: \"kubernetes.io/projected/2d2342d5-d133-44ae-957c-1a77cf088185-kube-api-access-bv29j\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.432398 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv29j\" (UniqueName: \"kubernetes.io/projected/2d2342d5-d133-44ae-957c-1a77cf088185-kube-api-access-bv29j\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.432521 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2342d5-d133-44ae-957c-1a77cf088185-secret-volume\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.432573 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2342d5-d133-44ae-957c-1a77cf088185-config-volume\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.433523 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2342d5-d133-44ae-957c-1a77cf088185-config-volume\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"
(UniqueName: \"kubernetes.io/configmap/2d2342d5-d133-44ae-957c-1a77cf088185-config-volume\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.451691 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2342d5-d133-44ae-957c-1a77cf088185-secret-volume\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.454149 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv29j\" (UniqueName: \"kubernetes.io/projected/2d2342d5-d133-44ae-957c-1a77cf088185-kube-api-access-bv29j\") pod \"collect-profiles-29398050-m6kdj\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.482124 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.919058 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"] Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.946147 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:30:00 crc kubenswrapper[5028]: I1123 07:30:00.946223 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:30:01 crc kubenswrapper[5028]: I1123 07:30:01.198737 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" event={"ID":"2d2342d5-d133-44ae-957c-1a77cf088185","Type":"ContainerStarted","Data":"c7c51d505ec8647d45a099c5fe1e422e5bcd9776e93ec88296d94de0425b2c86"} Nov 23 07:30:01 crc kubenswrapper[5028]: I1123 07:30:01.199222 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" event={"ID":"2d2342d5-d133-44ae-957c-1a77cf088185","Type":"ContainerStarted","Data":"8bb6456091a490c122cabf14a2493bb264057dcdb47c901c38ca8b572e5a6419"} Nov 23 07:30:01 crc kubenswrapper[5028]: I1123 07:30:01.219443 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" podStartSLOduration=1.219414976 podStartE2EDuration="1.219414976s" podCreationTimestamp="2025-11-23 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 07:30:01.21793352 +0000 UTC m=+2384.915338339" watchObservedRunningTime="2025-11-23 07:30:01.219414976 +0000 UTC m=+2384.916819755" Nov 23 07:30:02 
crc kubenswrapper[5028]: I1123 07:30:02.209152 5028 generic.go:334] "Generic (PLEG): container finished" podID="2d2342d5-d133-44ae-957c-1a77cf088185" containerID="c7c51d505ec8647d45a099c5fe1e422e5bcd9776e93ec88296d94de0425b2c86" exitCode=0 Nov 23 07:30:02 crc kubenswrapper[5028]: I1123 07:30:02.209225 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" event={"ID":"2d2342d5-d133-44ae-957c-1a77cf088185","Type":"ContainerDied","Data":"c7c51d505ec8647d45a099c5fe1e422e5bcd9776e93ec88296d94de0425b2c86"} Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.528395 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.682965 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv29j\" (UniqueName: \"kubernetes.io/projected/2d2342d5-d133-44ae-957c-1a77cf088185-kube-api-access-bv29j\") pod \"2d2342d5-d133-44ae-957c-1a77cf088185\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.683122 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2342d5-d133-44ae-957c-1a77cf088185-secret-volume\") pod \"2d2342d5-d133-44ae-957c-1a77cf088185\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.683194 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2342d5-d133-44ae-957c-1a77cf088185-config-volume\") pod \"2d2342d5-d133-44ae-957c-1a77cf088185\" (UID: \"2d2342d5-d133-44ae-957c-1a77cf088185\") " Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.683814 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d2342d5-d133-44ae-957c-1a77cf088185-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d2342d5-d133-44ae-957c-1a77cf088185" (UID: "2d2342d5-d133-44ae-957c-1a77cf088185"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.689347 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2342d5-d133-44ae-957c-1a77cf088185-kube-api-access-bv29j" (OuterVolumeSpecName: "kube-api-access-bv29j") pod "2d2342d5-d133-44ae-957c-1a77cf088185" (UID: "2d2342d5-d133-44ae-957c-1a77cf088185"). InnerVolumeSpecName "kube-api-access-bv29j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.690202 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d2342d5-d133-44ae-957c-1a77cf088185-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d2342d5-d133-44ae-957c-1a77cf088185" (UID: "2d2342d5-d133-44ae-957c-1a77cf088185"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.785148 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv29j\" (UniqueName: \"kubernetes.io/projected/2d2342d5-d133-44ae-957c-1a77cf088185-kube-api-access-bv29j\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.785186 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d2342d5-d133-44ae-957c-1a77cf088185-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:03 crc kubenswrapper[5028]: I1123 07:30:03.785197 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d2342d5-d133-44ae-957c-1a77cf088185-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:30:04 crc kubenswrapper[5028]: I1123 07:30:04.226855 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" event={"ID":"2d2342d5-d133-44ae-957c-1a77cf088185","Type":"ContainerDied","Data":"8bb6456091a490c122cabf14a2493bb264057dcdb47c901c38ca8b572e5a6419"} Nov 23 07:30:04 crc kubenswrapper[5028]: I1123 07:30:04.226907 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb6456091a490c122cabf14a2493bb264057dcdb47c901c38ca8b572e5a6419" Nov 23 07:30:04 crc kubenswrapper[5028]: I1123 07:30:04.227034 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj" Nov 23 07:30:04 crc kubenswrapper[5028]: I1123 07:30:04.287739 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"] Nov 23 07:30:04 crc kubenswrapper[5028]: I1123 07:30:04.292312 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398005-6sblh"] Nov 23 07:30:05 crc kubenswrapper[5028]: I1123 07:30:05.066716 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab3473d-0543-4160-8ad4-f262ec89e82b" path="/var/lib/kubelet/pods/fab3473d-0543-4160-8ad4-f262ec89e82b/volumes" Nov 23 07:30:23 crc kubenswrapper[5028]: I1123 07:30:23.112914 5028 scope.go:117] "RemoveContainer" containerID="26b150f57bfa4845a597b6145a89ff3ae12a7c21fc452e0f64daba97c9f81293" Nov 23 07:30:30 crc kubenswrapper[5028]: I1123 07:30:30.946388 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:30:30 crc kubenswrapper[5028]: I1123 07:30:30.946733 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:30:30 crc kubenswrapper[5028]: I1123 07:30:30.946800 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:30:30 crc kubenswrapper[5028]: I1123 07:30:30.947837 5028 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:30:30 crc kubenswrapper[5028]: I1123 07:30:30.947939 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" gracePeriod=600 Nov 23 07:30:31 crc kubenswrapper[5028]: E1123 07:30:31.074269 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:30:31 crc kubenswrapper[5028]: I1123 07:30:31.482302 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" exitCode=0 Nov 23 07:30:31 crc kubenswrapper[5028]: I1123 07:30:31.482350 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b"} Nov 23 07:30:31 crc kubenswrapper[5028]: I1123 07:30:31.482394 5028 scope.go:117] "RemoveContainer" containerID="782f86c38923576d22672d1444874303a2956eb9b0c634e5bc6ab205f922aa0e" Nov 23 07:30:31 crc kubenswrapper[5028]: I1123 07:30:31.482916 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:30:31 crc kubenswrapper[5028]: E1123 07:30:31.483436 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:30:45 crc kubenswrapper[5028]: I1123 07:30:45.053667 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:30:45 crc kubenswrapper[5028]: E1123 07:30:45.054450 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:30:56 crc kubenswrapper[5028]: I1123 07:30:56.068348 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:30:56 crc kubenswrapper[5028]: E1123 07:30:56.069467 5028 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:31:08 crc kubenswrapper[5028]: I1123 07:31:08.054099 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:31:08 crc kubenswrapper[5028]: E1123 07:31:08.055295 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:31:21 crc kubenswrapper[5028]: I1123 07:31:21.053657 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:31:21 crc kubenswrapper[5028]: E1123 07:31:21.054554 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:31:35 crc kubenswrapper[5028]: I1123 07:31:35.053176 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:31:35 crc kubenswrapper[5028]: E1123 07:31:35.053852 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:31:46 crc kubenswrapper[5028]: I1123 07:31:46.052896 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:31:46 crc kubenswrapper[5028]: E1123 07:31:46.053856 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:32:01 crc kubenswrapper[5028]: I1123 07:32:01.053247 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:32:01 crc kubenswrapper[5028]: E1123 07:32:01.053864 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:32:12 crc kubenswrapper[5028]: I1123 07:32:12.053708 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:32:12 crc kubenswrapper[5028]: E1123 07:32:12.055376 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:32:23 crc kubenswrapper[5028]: I1123 07:32:23.053428 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:32:23 crc kubenswrapper[5028]: E1123 07:32:23.054152 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:32:34 crc kubenswrapper[5028]: I1123 07:32:34.066546 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:32:34 crc kubenswrapper[5028]: E1123 07:32:34.067586 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:32:49 crc kubenswrapper[5028]: I1123 07:32:49.053405 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:32:49 crc kubenswrapper[5028]: E1123 07:32:49.054419 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:33:01 crc kubenswrapper[5028]: I1123 07:33:01.053405 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:33:01 crc kubenswrapper[5028]: E1123 07:33:01.054754 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:33:15 crc kubenswrapper[5028]: I1123 07:33:15.053093 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:33:15 crc kubenswrapper[5028]: E1123 07:33:15.054489 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:33:27 crc kubenswrapper[5028]: I1123 07:33:27.056376 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:33:27 crc kubenswrapper[5028]: E1123 07:33:27.057228 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:33:38 crc kubenswrapper[5028]: I1123 07:33:38.052882 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:33:38 crc kubenswrapper[5028]: E1123 07:33:38.054043 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:33:53 crc kubenswrapper[5028]: I1123 07:33:53.053586 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:33:53 crc kubenswrapper[5028]: E1123 07:33:53.054346 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:34:07 crc kubenswrapper[5028]: I1123 07:34:07.062682 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:34:07 crc kubenswrapper[5028]: E1123 07:34:07.063679 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:34:19 crc kubenswrapper[5028]: I1123 07:34:19.053074 5028 
scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:34:19 crc kubenswrapper[5028]: E1123 07:34:19.054215 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:34:33 crc kubenswrapper[5028]: I1123 07:34:33.053558 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:34:33 crc kubenswrapper[5028]: E1123 07:34:33.054385 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:34:47 crc kubenswrapper[5028]: I1123 07:34:47.056899 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:34:47 crc kubenswrapper[5028]: E1123 07:34:47.057633 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:34:58 crc kubenswrapper[5028]: I1123 07:34:58.053270 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:34:58 crc kubenswrapper[5028]: E1123 07:34:58.054045 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:35:13 crc kubenswrapper[5028]: I1123 07:35:13.054716 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:35:13 crc kubenswrapper[5028]: E1123 07:35:13.055795 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:35:27 crc kubenswrapper[5028]: I1123 07:35:27.061349 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:35:27 crc kubenswrapper[5028]: E1123 07:35:27.062352 5028 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:35:41 crc kubenswrapper[5028]: I1123 07:35:41.054487 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:35:41 crc kubenswrapper[5028]: I1123 07:35:41.594952 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"b865f7bdf18e540f0e634f75e1e714bd143cad1e3fd21f05d39f3f5b3ca6246f"} Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.415498 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4jlll"] Nov 23 07:35:47 crc kubenswrapper[5028]: E1123 07:35:47.416409 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2342d5-d133-44ae-957c-1a77cf088185" containerName="collect-profiles" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.416427 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2342d5-d133-44ae-957c-1a77cf088185" containerName="collect-profiles" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.416598 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2342d5-d133-44ae-957c-1a77cf088185" containerName="collect-profiles" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.417773 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.427784 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jlll"] Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.562916 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-utilities\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.563241 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtzsn\" (UniqueName: \"kubernetes.io/projected/e4111531-f0ce-4284-a26d-f5d8946f89bd-kube-api-access-qtzsn\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.563525 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-catalog-content\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.664411 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-utilities\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.664461 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtzsn\" (UniqueName: \"kubernetes.io/projected/e4111531-f0ce-4284-a26d-f5d8946f89bd-kube-api-access-qtzsn\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.664508 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-catalog-content\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.665215 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-catalog-content\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.665263 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-utilities\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.683990 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qtzsn\" (UniqueName: \"kubernetes.io/projected/e4111531-f0ce-4284-a26d-f5d8946f89bd-kube-api-access-qtzsn\") pod \"community-operators-4jlll\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:47 crc kubenswrapper[5028]: I1123 07:35:47.737483 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:48 crc kubenswrapper[5028]: I1123 07:35:48.259731 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jlll"] Nov 23 07:35:48 crc kubenswrapper[5028]: I1123 07:35:48.646408 5028 generic.go:334] "Generic (PLEG): container finished" podID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerID="451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96" exitCode=0 Nov 23 07:35:48 crc kubenswrapper[5028]: I1123 07:35:48.646511 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jlll" event={"ID":"e4111531-f0ce-4284-a26d-f5d8946f89bd","Type":"ContainerDied","Data":"451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96"} Nov 23 07:35:48 crc kubenswrapper[5028]: I1123 07:35:48.646753 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jlll" event={"ID":"e4111531-f0ce-4284-a26d-f5d8946f89bd","Type":"ContainerStarted","Data":"d4d2ec84b1c745002a0d57331b77ea9b16c5e9a5ed7cc99bcf108a09c1ff6c70"} Nov 23 07:35:48 crc kubenswrapper[5028]: I1123 07:35:48.648542 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:35:49 crc kubenswrapper[5028]: I1123 07:35:49.655577 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jlll" event={"ID":"e4111531-f0ce-4284-a26d-f5d8946f89bd","Type":"ContainerStarted","Data":"3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137"} Nov 23 07:35:50 crc kubenswrapper[5028]: I1123 07:35:50.665873 5028 generic.go:334] "Generic (PLEG): container finished" podID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerID="3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137" exitCode=0 Nov 23 07:35:50 crc kubenswrapper[5028]: I1123 07:35:50.666004 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jlll" event={"ID":"e4111531-f0ce-4284-a26d-f5d8946f89bd","Type":"ContainerDied","Data":"3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137"} Nov 23 07:35:51 crc kubenswrapper[5028]: I1123 07:35:51.679298 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jlll" event={"ID":"e4111531-f0ce-4284-a26d-f5d8946f89bd","Type":"ContainerStarted","Data":"c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347"} Nov 23 07:35:51 crc kubenswrapper[5028]: I1123 07:35:51.705815 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4jlll" podStartSLOduration=2.279056838 podStartE2EDuration="4.705789636s" podCreationTimestamp="2025-11-23 07:35:47 +0000 UTC" firstStartedPulling="2025-11-23 07:35:48.648242446 +0000 UTC m=+2732.345647235" lastFinishedPulling="2025-11-23 07:35:51.074975254 +0000 UTC m=+2734.772380033" observedRunningTime="2025-11-23 07:35:51.703898129 +0000 UTC m=+2735.401302968" watchObservedRunningTime="2025-11-23 
07:35:51.705789636 +0000 UTC m=+2735.403194415" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.738371 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.739064 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.766087 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x8b76"] Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.767936 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.788596 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8b76"] Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.842267 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.868286 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-utilities\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.868562 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-catalog-content\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.868599 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2477p\" (UniqueName: \"kubernetes.io/projected/2672ef0e-4846-4948-b1e3-c628e8d1e347-kube-api-access-2477p\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.901818 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.969705 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-utilities\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.969791 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-catalog-content\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.969819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2477p\" 
(UniqueName: \"kubernetes.io/projected/2672ef0e-4846-4948-b1e3-c628e8d1e347-kube-api-access-2477p\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.970476 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-catalog-content\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.970566 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-utilities\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:57 crc kubenswrapper[5028]: I1123 07:35:57.991699 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2477p\" (UniqueName: \"kubernetes.io/projected/2672ef0e-4846-4948-b1e3-c628e8d1e347-kube-api-access-2477p\") pod \"certified-operators-x8b76\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:58 crc kubenswrapper[5028]: I1123 07:35:58.152890 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:35:58 crc kubenswrapper[5028]: I1123 07:35:58.641831 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8b76"] Nov 23 07:35:58 crc kubenswrapper[5028]: I1123 07:35:58.837112 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8b76" event={"ID":"2672ef0e-4846-4948-b1e3-c628e8d1e347","Type":"ContainerStarted","Data":"f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3"} Nov 23 07:35:58 crc kubenswrapper[5028]: I1123 07:35:58.837777 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8b76" event={"ID":"2672ef0e-4846-4948-b1e3-c628e8d1e347","Type":"ContainerStarted","Data":"49505508d1bf122d17cccb119e01bfc703b83518402259341e2e0a58092a930a"} Nov 23 07:35:59 crc kubenswrapper[5028]: I1123 07:35:59.846488 5028 generic.go:334] "Generic (PLEG): container finished" podID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerID="f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3" exitCode=0 Nov 23 07:35:59 crc kubenswrapper[5028]: I1123 07:35:59.846571 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8b76" event={"ID":"2672ef0e-4846-4948-b1e3-c628e8d1e347","Type":"ContainerDied","Data":"f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3"} Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.137845 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4jlll"] Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.138345 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4jlll" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="registry-server" containerID="cri-o://c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347" gracePeriod=2 Nov 23 
07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.598409 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.713595 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-utilities\") pod \"e4111531-f0ce-4284-a26d-f5d8946f89bd\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.713663 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtzsn\" (UniqueName: \"kubernetes.io/projected/e4111531-f0ce-4284-a26d-f5d8946f89bd-kube-api-access-qtzsn\") pod \"e4111531-f0ce-4284-a26d-f5d8946f89bd\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.713733 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-catalog-content\") pod \"e4111531-f0ce-4284-a26d-f5d8946f89bd\" (UID: \"e4111531-f0ce-4284-a26d-f5d8946f89bd\") " Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.714468 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-utilities" (OuterVolumeSpecName: "utilities") pod "e4111531-f0ce-4284-a26d-f5d8946f89bd" (UID: "e4111531-f0ce-4284-a26d-f5d8946f89bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.722146 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4111531-f0ce-4284-a26d-f5d8946f89bd-kube-api-access-qtzsn" (OuterVolumeSpecName: "kube-api-access-qtzsn") pod "e4111531-f0ce-4284-a26d-f5d8946f89bd" (UID: "e4111531-f0ce-4284-a26d-f5d8946f89bd"). InnerVolumeSpecName "kube-api-access-qtzsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.766840 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4111531-f0ce-4284-a26d-f5d8946f89bd" (UID: "e4111531-f0ce-4284-a26d-f5d8946f89bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.816394 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtzsn\" (UniqueName: \"kubernetes.io/projected/e4111531-f0ce-4284-a26d-f5d8946f89bd-kube-api-access-qtzsn\") on node \"crc\" DevicePath \"\"" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.816440 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.816459 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4111531-f0ce-4284-a26d-f5d8946f89bd-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.857249 5028 generic.go:334] "Generic (PLEG): container finished" podID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerID="c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347" exitCode=0 Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.857313 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jlll" event={"ID":"e4111531-f0ce-4284-a26d-f5d8946f89bd","Type":"ContainerDied","Data":"c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347"} Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.857325 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jlll" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.857364 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jlll" event={"ID":"e4111531-f0ce-4284-a26d-f5d8946f89bd","Type":"ContainerDied","Data":"d4d2ec84b1c745002a0d57331b77ea9b16c5e9a5ed7cc99bcf108a09c1ff6c70"} Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.857399 5028 scope.go:117] "RemoveContainer" containerID="c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.890855 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4jlll"] Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.894792 5028 scope.go:117] "RemoveContainer" containerID="3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.898531 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4jlll"] Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.915365 5028 scope.go:117] "RemoveContainer" containerID="451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.951371 5028 scope.go:117] "RemoveContainer" containerID="c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347" Nov 23 07:36:00 crc kubenswrapper[5028]: E1123 07:36:00.951972 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347\": container with ID starting with c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347 not found: ID does not exist" containerID="c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.953726 
5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347"} err="failed to get container status \"c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347\": rpc error: code = NotFound desc = could not find container \"c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347\": container with ID starting with c0004e8789c43527c3df6a91aba07e32dc900b4dfd006647183232a10192c347 not found: ID does not exist" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.953831 5028 scope.go:117] "RemoveContainer" containerID="3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137" Nov 23 07:36:00 crc kubenswrapper[5028]: E1123 07:36:00.954359 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137\": container with ID starting with 3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137 not found: ID does not exist" containerID="3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.954392 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137"} err="failed to get container status \"3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137\": rpc error: code = NotFound desc = could not find container \"3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137\": container with ID starting with 3fb927b2b45656a40229530a0fbfaf2fdf3a0ece37edc4bf8d9e83a3fcbb6137 not found: ID does not exist" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.954411 5028 scope.go:117] "RemoveContainer" containerID="451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96" Nov 23 07:36:00 crc kubenswrapper[5028]: E1123 07:36:00.954695 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96\": container with ID starting with 451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96 not found: ID does not exist" containerID="451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96" Nov 23 07:36:00 crc kubenswrapper[5028]: I1123 07:36:00.954719 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96"} err="failed to get container status \"451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96\": rpc error: code = NotFound desc = could not find container \"451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96\": container with ID starting with 451114a12957f17862768dacf7f7cefd48dccfe25ab8bde564ff22bada877b96 not found: ID does not exist" Nov 23 07:36:01 crc kubenswrapper[5028]: I1123 07:36:01.069054 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" path="/var/lib/kubelet/pods/e4111531-f0ce-4284-a26d-f5d8946f89bd/volumes" Nov 23 07:36:01 crc kubenswrapper[5028]: I1123 07:36:01.869878 5028 generic.go:334] "Generic (PLEG): container finished" podID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerID="24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01" exitCode=0 Nov 23 07:36:01 crc kubenswrapper[5028]: 
I1123 07:36:01.869938 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8b76" event={"ID":"2672ef0e-4846-4948-b1e3-c628e8d1e347","Type":"ContainerDied","Data":"24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01"} Nov 23 07:36:02 crc kubenswrapper[5028]: I1123 07:36:02.883523 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8b76" event={"ID":"2672ef0e-4846-4948-b1e3-c628e8d1e347","Type":"ContainerStarted","Data":"c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376"} Nov 23 07:36:02 crc kubenswrapper[5028]: I1123 07:36:02.903731 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x8b76" podStartSLOduration=3.5018852259999997 podStartE2EDuration="5.903701875s" podCreationTimestamp="2025-11-23 07:35:57 +0000 UTC" firstStartedPulling="2025-11-23 07:35:59.850734188 +0000 UTC m=+2743.548139007" lastFinishedPulling="2025-11-23 07:36:02.252550877 +0000 UTC m=+2745.949955656" observedRunningTime="2025-11-23 07:36:02.900345173 +0000 UTC m=+2746.597749962" watchObservedRunningTime="2025-11-23 07:36:02.903701875 +0000 UTC m=+2746.601106694" Nov 23 07:36:08 crc kubenswrapper[5028]: I1123 07:36:08.153475 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:36:08 crc kubenswrapper[5028]: I1123 07:36:08.153830 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:36:08 crc kubenswrapper[5028]: I1123 07:36:08.231042 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:36:09 crc kubenswrapper[5028]: I1123 07:36:09.018800 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:36:09 crc kubenswrapper[5028]: I1123 07:36:09.085506 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8b76"] Nov 23 07:36:10 crc kubenswrapper[5028]: I1123 07:36:10.960364 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x8b76" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="registry-server" containerID="cri-o://c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376" gracePeriod=2 Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.377768 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.485378 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-catalog-content\") pod \"2672ef0e-4846-4948-b1e3-c628e8d1e347\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.485461 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2477p\" (UniqueName: \"kubernetes.io/projected/2672ef0e-4846-4948-b1e3-c628e8d1e347-kube-api-access-2477p\") pod \"2672ef0e-4846-4948-b1e3-c628e8d1e347\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.485522 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-utilities\") pod \"2672ef0e-4846-4948-b1e3-c628e8d1e347\" (UID: \"2672ef0e-4846-4948-b1e3-c628e8d1e347\") " Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.486770 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-utilities" (OuterVolumeSpecName: "utilities") pod "2672ef0e-4846-4948-b1e3-c628e8d1e347" (UID: "2672ef0e-4846-4948-b1e3-c628e8d1e347"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.498225 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2672ef0e-4846-4948-b1e3-c628e8d1e347-kube-api-access-2477p" (OuterVolumeSpecName: "kube-api-access-2477p") pod "2672ef0e-4846-4948-b1e3-c628e8d1e347" (UID: "2672ef0e-4846-4948-b1e3-c628e8d1e347"). InnerVolumeSpecName "kube-api-access-2477p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.547873 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2672ef0e-4846-4948-b1e3-c628e8d1e347" (UID: "2672ef0e-4846-4948-b1e3-c628e8d1e347"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.587144 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.587177 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2477p\" (UniqueName: \"kubernetes.io/projected/2672ef0e-4846-4948-b1e3-c628e8d1e347-kube-api-access-2477p\") on node \"crc\" DevicePath \"\"" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.587188 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2672ef0e-4846-4948-b1e3-c628e8d1e347-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.970551 5028 generic.go:334] "Generic (PLEG): container finished" podID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerID="c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376" exitCode=0 Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.970603 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8b76" event={"ID":"2672ef0e-4846-4948-b1e3-c628e8d1e347","Type":"ContainerDied","Data":"c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376"} Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.970631 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8b76" event={"ID":"2672ef0e-4846-4948-b1e3-c628e8d1e347","Type":"ContainerDied","Data":"49505508d1bf122d17cccb119e01bfc703b83518402259341e2e0a58092a930a"} Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.970651 5028 scope.go:117] "RemoveContainer" containerID="c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.970774 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x8b76" Nov 23 07:36:11 crc kubenswrapper[5028]: I1123 07:36:11.999552 5028 scope.go:117] "RemoveContainer" containerID="24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01" Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.016134 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8b76"] Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.020795 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x8b76"] Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.044774 5028 scope.go:117] "RemoveContainer" containerID="f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3" Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.064107 5028 scope.go:117] "RemoveContainer" containerID="c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376" Nov 23 07:36:12 crc kubenswrapper[5028]: E1123 07:36:12.064505 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376\": container with ID starting with c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376 not found: ID does not exist" containerID="c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376" Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.064559 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376"} err="failed to get container status \"c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376\": rpc error: code = NotFound desc = could not find container \"c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376\": container with ID starting with c2d9a10c5ab2c3ab89ed83029facea94fbfec879e93e320ba412d1b375bb7376 not found: ID does not exist" Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.064597 5028 scope.go:117] "RemoveContainer" containerID="24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01" Nov 23 07:36:12 crc kubenswrapper[5028]: E1123 07:36:12.064933 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01\": container with ID starting with 24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01 not found: ID does not exist" containerID="24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01" Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.064982 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01"} err="failed to get container status \"24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01\": rpc error: code = NotFound desc = could not find container \"24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01\": container with ID starting with 24f444b1a56327deffff3af595e48f71199cc8ba891577353a0709f167011b01 not found: ID does not exist" Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.065005 5028 scope.go:117] "RemoveContainer" containerID="f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3" Nov 23 07:36:12 crc kubenswrapper[5028]: E1123 07:36:12.065363 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3\": container with ID starting with f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3 not found: ID does not exist" containerID="f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3" Nov 23 07:36:12 crc kubenswrapper[5028]: I1123 07:36:12.065408 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3"} err="failed to get container status \"f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3\": rpc error: code = NotFound desc = could not find container \"f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3\": container with ID starting with f0e1aae0226d908c542a0d3c3a34ed432d15e91ac52c7e68d7faec42140e0de3 not found: ID does not exist" Nov 23 07:36:13 crc kubenswrapper[5028]: I1123 07:36:13.065123 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" path="/var/lib/kubelet/pods/2672ef0e-4846-4948-b1e3-c628e8d1e347/volumes" Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.007416 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-75nbc"] Nov 23 07:37:06 crc kubenswrapper[5028]: E1123 07:37:06.008997 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="registry-server" Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009049 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="registry-server" Nov 23 07:37:06 crc kubenswrapper[5028]: E1123 07:37:06.009070 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="registry-server" Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009078 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="registry-server" Nov 23 07:37:06 crc kubenswrapper[5028]: E1123 07:37:06.009490 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="extract-utilities" Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009506 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="extract-utilities" Nov 23 07:37:06 crc kubenswrapper[5028]: E1123 07:37:06.009555 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="extract-content" Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009564 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="extract-content" Nov 23 07:37:06 crc kubenswrapper[5028]: E1123 07:37:06.009586 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="extract-content" Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009594 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="extract-content" Nov 23 07:37:06 crc kubenswrapper[5028]: E1123 07:37:06.009613 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="extract-utilities" 
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009621 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="extract-utilities"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009795 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4111531-f0ce-4284-a26d-f5d8946f89bd" containerName="registry-server"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.009822 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2672ef0e-4846-4948-b1e3-c628e8d1e347" containerName="registry-server"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.010877 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.012577 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75nbc"]
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.073589 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-utilities\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.073664 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78g5n\" (UniqueName: \"kubernetes.io/projected/7b0da347-fe87-4c1b-85db-b4abb502d901-kube-api-access-78g5n\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.073771 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-catalog-content\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.175200 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-utilities\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.175272 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78g5n\" (UniqueName: \"kubernetes.io/projected/7b0da347-fe87-4c1b-85db-b4abb502d901-kube-api-access-78g5n\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.175296 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-catalog-content\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.175994 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-catalog-content\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.176005 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-utilities\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.202247 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78g5n\" (UniqueName: \"kubernetes.io/projected/7b0da347-fe87-4c1b-85db-b4abb502d901-kube-api-access-78g5n\") pod \"redhat-operators-75nbc\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") " pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.344187 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:06 crc kubenswrapper[5028]: I1123 07:37:06.817327 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75nbc"]
Nov 23 07:37:07 crc kubenswrapper[5028]: I1123 07:37:07.446044 5028 generic.go:334] "Generic (PLEG): container finished" podID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerID="64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f" exitCode=0
Nov 23 07:37:07 crc kubenswrapper[5028]: I1123 07:37:07.446097 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75nbc" event={"ID":"7b0da347-fe87-4c1b-85db-b4abb502d901","Type":"ContainerDied","Data":"64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f"}
Nov 23 07:37:07 crc kubenswrapper[5028]: I1123 07:37:07.446321 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75nbc" event={"ID":"7b0da347-fe87-4c1b-85db-b4abb502d901","Type":"ContainerStarted","Data":"eb00959d2767d5248259d0122b4564852c96361bd13edbddb9e8458515675065"}
Nov 23 07:37:08 crc kubenswrapper[5028]: I1123 07:37:08.456740 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75nbc" event={"ID":"7b0da347-fe87-4c1b-85db-b4abb502d901","Type":"ContainerStarted","Data":"3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab"}
Nov 23 07:37:09 crc kubenswrapper[5028]: I1123 07:37:09.468775 5028 generic.go:334] "Generic (PLEG): container finished" podID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerID="3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab" exitCode=0
Nov 23 07:37:09 crc kubenswrapper[5028]: I1123 07:37:09.468816 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75nbc" event={"ID":"7b0da347-fe87-4c1b-85db-b4abb502d901","Type":"ContainerDied","Data":"3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab"}
Nov 23 07:37:10 crc kubenswrapper[5028]: I1123 07:37:10.477763 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75nbc" event={"ID":"7b0da347-fe87-4c1b-85db-b4abb502d901","Type":"ContainerStarted","Data":"aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965"}
Nov 23 07:37:10 crc kubenswrapper[5028]: I1123 07:37:10.499218 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-75nbc" podStartSLOduration=3.084176034 podStartE2EDuration="5.499196477s" podCreationTimestamp="2025-11-23 07:37:05 +0000 UTC" firstStartedPulling="2025-11-23 07:37:07.447407796 +0000 UTC m=+2811.144812575" lastFinishedPulling="2025-11-23 07:37:09.862428239 +0000 UTC m=+2813.559833018" observedRunningTime="2025-11-23 07:37:10.493632411 +0000 UTC m=+2814.191037210" watchObservedRunningTime="2025-11-23 07:37:10.499196477 +0000 UTC m=+2814.196601256"
Nov 23 07:37:16 crc kubenswrapper[5028]: I1123 07:37:16.345171 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:16 crc kubenswrapper[5028]: I1123 07:37:16.345817 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:16 crc kubenswrapper[5028]: I1123 07:37:16.405949 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:16 crc kubenswrapper[5028]: I1123 07:37:16.584148 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:17 crc kubenswrapper[5028]: I1123 07:37:17.590317 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75nbc"]
Nov 23 07:37:18 crc kubenswrapper[5028]: I1123 07:37:18.539138 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-75nbc" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="registry-server" containerID="cri-o://aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965" gracePeriod=2
Nov 23 07:37:18 crc kubenswrapper[5028]: I1123 07:37:18.907390 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75nbc"
Nov 23 07:37:18 crc kubenswrapper[5028]: I1123 07:37:18.947833 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-catalog-content\") pod \"7b0da347-fe87-4c1b-85db-b4abb502d901\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") "
Nov 23 07:37:18 crc kubenswrapper[5028]: I1123 07:37:18.947909 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-utilities\") pod \"7b0da347-fe87-4c1b-85db-b4abb502d901\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") "
Nov 23 07:37:18 crc kubenswrapper[5028]: I1123 07:37:18.947935 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78g5n\" (UniqueName: \"kubernetes.io/projected/7b0da347-fe87-4c1b-85db-b4abb502d901-kube-api-access-78g5n\") pod \"7b0da347-fe87-4c1b-85db-b4abb502d901\" (UID: \"7b0da347-fe87-4c1b-85db-b4abb502d901\") "
Nov 23 07:37:18 crc kubenswrapper[5028]: I1123 07:37:18.949600 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-utilities" (OuterVolumeSpecName: "utilities") pod "7b0da347-fe87-4c1b-85db-b4abb502d901" (UID: "7b0da347-fe87-4c1b-85db-b4abb502d901"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:37:18 crc kubenswrapper[5028]: I1123 07:37:18.955353 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0da347-fe87-4c1b-85db-b4abb502d901-kube-api-access-78g5n" (OuterVolumeSpecName: "kube-api-access-78g5n") pod "7b0da347-fe87-4c1b-85db-b4abb502d901" (UID: "7b0da347-fe87-4c1b-85db-b4abb502d901"). InnerVolumeSpecName "kube-api-access-78g5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.048815 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.048846 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78g5n\" (UniqueName: \"kubernetes.io/projected/7b0da347-fe87-4c1b-85db-b4abb502d901-kube-api-access-78g5n\") on node \"crc\" DevicePath \"\"" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.547567 5028 generic.go:334] "Generic (PLEG): container finished" podID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerID="aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965" exitCode=0 Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.547655 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75nbc" event={"ID":"7b0da347-fe87-4c1b-85db-b4abb502d901","Type":"ContainerDied","Data":"aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965"} Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.547741 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75nbc" event={"ID":"7b0da347-fe87-4c1b-85db-b4abb502d901","Type":"ContainerDied","Data":"eb00959d2767d5248259d0122b4564852c96361bd13edbddb9e8458515675065"} Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.547778 5028 scope.go:117] "RemoveContainer" containerID="aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.547670 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-75nbc" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.576109 5028 scope.go:117] "RemoveContainer" containerID="3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.593078 5028 scope.go:117] "RemoveContainer" containerID="64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.633635 5028 scope.go:117] "RemoveContainer" containerID="aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965" Nov 23 07:37:19 crc kubenswrapper[5028]: E1123 07:37:19.634127 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965\": container with ID starting with aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965 not found: ID does not exist" containerID="aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.634166 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965"} err="failed to get container status \"aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965\": rpc error: code = NotFound desc = could not find container \"aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965\": container with ID starting with aee48dc2326dbd144a46a8e28c6ed8e966ade1df26d67092e0f5f32e0b9c8965 not found: ID does not exist" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.634192 5028 scope.go:117] "RemoveContainer" containerID="3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab" Nov 23 07:37:19 crc kubenswrapper[5028]: E1123 07:37:19.634558 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab\": container with ID starting with 3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab not found: ID does not exist" containerID="3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.634590 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab"} err="failed to get container status \"3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab\": rpc error: code = NotFound desc = could not find container \"3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab\": container with ID starting with 3ad38f627609fcd7862b79dcc008905e875fa736eeba728dd79133cd73e541ab not found: ID does not exist" Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.634609 5028 scope.go:117] "RemoveContainer" containerID="64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f" Nov 23 07:37:19 crc kubenswrapper[5028]: E1123 07:37:19.635010 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f\": container with ID starting with 64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f not found: ID does not exist" containerID="64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f" 
Nov 23 07:37:19 crc kubenswrapper[5028]: I1123 07:37:19.635037 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f"} err="failed to get container status \"64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f\": rpc error: code = NotFound desc = could not find container \"64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f\": container with ID starting with 64b5bf7b2f4fbfc0979673ce7862dd535228069d509e047753c91664715d6b1f not found: ID does not exist" Nov 23 07:37:20 crc kubenswrapper[5028]: I1123 07:37:20.105431 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b0da347-fe87-4c1b-85db-b4abb502d901" (UID: "7b0da347-fe87-4c1b-85db-b4abb502d901"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:37:20 crc kubenswrapper[5028]: I1123 07:37:20.163244 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0da347-fe87-4c1b-85db-b4abb502d901-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:37:20 crc kubenswrapper[5028]: I1123 07:37:20.182157 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-75nbc"] Nov 23 07:37:20 crc kubenswrapper[5028]: I1123 07:37:20.191391 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-75nbc"] Nov 23 07:37:21 crc kubenswrapper[5028]: I1123 07:37:21.067324 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" path="/var/lib/kubelet/pods/7b0da347-fe87-4c1b-85db-b4abb502d901/volumes" Nov 23 07:38:00 crc kubenswrapper[5028]: I1123 07:38:00.946455 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:38:00 crc kubenswrapper[5028]: I1123 07:38:00.946994 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:38:30 crc kubenswrapper[5028]: I1123 07:38:30.946827 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:38:30 crc kubenswrapper[5028]: I1123 07:38:30.947270 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.826836 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
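
From 07:38:00 onward a second story starts interleaving with the catalog pods: every 30 seconds the kubelet's liveness probe against machine-config-daemon is refused outright, meaning nothing is listening on 127.0.0.1:8798 at all. The check itself is a plain HTTP GET; a close Go approximation (the one-second timeout is an assumption, since the probe's timeoutSeconds is not shown in the log):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second} // timeout assumed, not in the log
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // A dead endpoint surfaces exactly as logged above:
            // "dial tcp 127.0.0.1:8798: connect: connection refused"
            fmt.Println("probe failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.Status) // any 2xx/3xx counts as success
    }
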
pods=["openshift-marketplace/redhat-marketplace-jpxz5"] Nov 23 07:38:54 crc kubenswrapper[5028]: E1123 07:38:54.828156 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="registry-server" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.828177 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="registry-server" Nov 23 07:38:54 crc kubenswrapper[5028]: E1123 07:38:54.828211 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="extract-utilities" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.828219 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="extract-utilities" Nov 23 07:38:54 crc kubenswrapper[5028]: E1123 07:38:54.828230 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="extract-content" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.828239 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="extract-content" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.828423 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0da347-fe87-4c1b-85db-b4abb502d901" containerName="registry-server" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.829695 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.877198 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpxz5"] Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.980798 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-utilities\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.980888 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m44c2\" (UniqueName: \"kubernetes.io/projected/6af68257-b3d0-455f-8350-a08bdcc34139-kube-api-access-m44c2\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:54 crc kubenswrapper[5028]: I1123 07:38:54.980940 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-catalog-content\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.083012 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-catalog-content\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.083411 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-utilities\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.083508 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m44c2\" (UniqueName: \"kubernetes.io/projected/6af68257-b3d0-455f-8350-a08bdcc34139-kube-api-access-m44c2\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.083582 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-catalog-content\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.083799 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-utilities\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.102692 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m44c2\" (UniqueName: \"kubernetes.io/projected/6af68257-b3d0-455f-8350-a08bdcc34139-kube-api-access-m44c2\") pod \"redhat-marketplace-jpxz5\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.192005 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.612369 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpxz5"] Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.802161 5028 generic.go:334] "Generic (PLEG): container finished" podID="6af68257-b3d0-455f-8350-a08bdcc34139" containerID="0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b" exitCode=0 Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.802336 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpxz5" event={"ID":"6af68257-b3d0-455f-8350-a08bdcc34139","Type":"ContainerDied","Data":"0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b"} Nov 23 07:38:55 crc kubenswrapper[5028]: I1123 07:38:55.802500 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpxz5" event={"ID":"6af68257-b3d0-455f-8350-a08bdcc34139","Type":"ContainerStarted","Data":"76adc55a53f925402d334d116f30ed78d416da7aeb0fdbb69f6df5cd7fe712c0"} Nov 23 07:38:56 crc kubenswrapper[5028]: I1123 07:38:56.824586 5028 generic.go:334] "Generic (PLEG): container finished" podID="6af68257-b3d0-455f-8350-a08bdcc34139" containerID="ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927" exitCode=0 Nov 23 07:38:56 crc kubenswrapper[5028]: I1123 07:38:56.824661 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpxz5" event={"ID":"6af68257-b3d0-455f-8350-a08bdcc34139","Type":"ContainerDied","Data":"ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927"} Nov 23 07:38:57 crc kubenswrapper[5028]: I1123 07:38:57.834483 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpxz5" event={"ID":"6af68257-b3d0-455f-8350-a08bdcc34139","Type":"ContainerStarted","Data":"141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c"} Nov 23 07:38:57 crc kubenswrapper[5028]: I1123 07:38:57.852913 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpxz5" podStartSLOduration=2.360824884 podStartE2EDuration="3.852895431s" podCreationTimestamp="2025-11-23 07:38:54 +0000 UTC" firstStartedPulling="2025-11-23 07:38:55.805536398 +0000 UTC m=+2919.502941177" lastFinishedPulling="2025-11-23 07:38:57.297606935 +0000 UTC m=+2920.995011724" observedRunningTime="2025-11-23 07:38:57.851328783 +0000 UTC m=+2921.548733572" watchObservedRunningTime="2025-11-23 07:38:57.852895431 +0000 UTC m=+2921.550300210" Nov 23 07:39:00 crc kubenswrapper[5028]: I1123 07:39:00.946422 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:39:00 crc kubenswrapper[5028]: I1123 07:39:00.946750 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:39:00 crc kubenswrapper[5028]: I1123 07:39:00.946806 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:39:00 crc kubenswrapper[5028]: I1123 07:39:00.947614 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b865f7bdf18e540f0e634f75e1e714bd143cad1e3fd21f05d39f3f5b3ca6246f"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:39:00 crc kubenswrapper[5028]: I1123 07:39:00.947678 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://b865f7bdf18e540f0e634f75e1e714bd143cad1e3fd21f05d39f3f5b3ca6246f" gracePeriod=600 Nov 23 07:39:01 crc kubenswrapper[5028]: I1123 07:39:01.879311 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="b865f7bdf18e540f0e634f75e1e714bd143cad1e3fd21f05d39f3f5b3ca6246f" exitCode=0 Nov 23 07:39:01 crc kubenswrapper[5028]: I1123 07:39:01.880275 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"b865f7bdf18e540f0e634f75e1e714bd143cad1e3fd21f05d39f3f5b3ca6246f"} Nov 23 07:39:01 crc kubenswrapper[5028]: I1123 07:39:01.880359 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363"} Nov 23 07:39:01 crc kubenswrapper[5028]: I1123 07:39:01.880420 5028 scope.go:117] "RemoveContainer" containerID="d4267a4ae077dfe8bf5dc4a8cf3ee5428b2d9419d74ed98014582071e50b023b" Nov 23 07:39:05 crc kubenswrapper[5028]: I1123 07:39:05.192553 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:39:05 crc kubenswrapper[5028]: I1123 07:39:05.192876 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:39:05 crc kubenswrapper[5028]: I1123 07:39:05.258426 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:39:05 crc kubenswrapper[5028]: I1123 07:39:05.986258 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:39:06 crc kubenswrapper[5028]: I1123 07:39:06.036088 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpxz5"] Nov 23 07:39:07 crc kubenswrapper[5028]: I1123 07:39:07.940077 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpxz5" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="registry-server" containerID="cri-o://141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c" gracePeriod=2 Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.389596 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.502405 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-catalog-content\") pod \"6af68257-b3d0-455f-8350-a08bdcc34139\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.502804 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m44c2\" (UniqueName: \"kubernetes.io/projected/6af68257-b3d0-455f-8350-a08bdcc34139-kube-api-access-m44c2\") pod \"6af68257-b3d0-455f-8350-a08bdcc34139\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.502997 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-utilities\") pod \"6af68257-b3d0-455f-8350-a08bdcc34139\" (UID: \"6af68257-b3d0-455f-8350-a08bdcc34139\") " Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.504083 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-utilities" (OuterVolumeSpecName: "utilities") pod "6af68257-b3d0-455f-8350-a08bdcc34139" (UID: "6af68257-b3d0-455f-8350-a08bdcc34139"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.508138 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af68257-b3d0-455f-8350-a08bdcc34139-kube-api-access-m44c2" (OuterVolumeSpecName: "kube-api-access-m44c2") pod "6af68257-b3d0-455f-8350-a08bdcc34139" (UID: "6af68257-b3d0-455f-8350-a08bdcc34139"). InnerVolumeSpecName "kube-api-access-m44c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.523824 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6af68257-b3d0-455f-8350-a08bdcc34139" (UID: "6af68257-b3d0-455f-8350-a08bdcc34139"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.604306 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.604502 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m44c2\" (UniqueName: \"kubernetes.io/projected/6af68257-b3d0-455f-8350-a08bdcc34139-kube-api-access-m44c2\") on node \"crc\" DevicePath \"\"" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.604660 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af68257-b3d0-455f-8350-a08bdcc34139-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.949369 5028 generic.go:334] "Generic (PLEG): container finished" podID="6af68257-b3d0-455f-8350-a08bdcc34139" containerID="141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c" exitCode=0 Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.949429 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpxz5" event={"ID":"6af68257-b3d0-455f-8350-a08bdcc34139","Type":"ContainerDied","Data":"141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c"} Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.949528 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpxz5" event={"ID":"6af68257-b3d0-455f-8350-a08bdcc34139","Type":"ContainerDied","Data":"76adc55a53f925402d334d116f30ed78d416da7aeb0fdbb69f6df5cd7fe712c0"} Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.949518 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpxz5" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.949553 5028 scope.go:117] "RemoveContainer" containerID="141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.973198 5028 scope.go:117] "RemoveContainer" containerID="ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927" Nov 23 07:39:08 crc kubenswrapper[5028]: I1123 07:39:08.994449 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpxz5"] Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.002919 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpxz5"] Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.011670 5028 scope.go:117] "RemoveContainer" containerID="0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b" Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.043740 5028 scope.go:117] "RemoveContainer" containerID="141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c" Nov 23 07:39:09 crc kubenswrapper[5028]: E1123 07:39:09.044331 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c\": container with ID starting with 141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c not found: ID does not exist" containerID="141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c" Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.044430 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c"} err="failed to get container status \"141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c\": rpc error: code = NotFound desc = could not find container \"141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c\": container with ID starting with 141abe891e27f3d0fc18c65c5216b76bca8d502d5da1c185fcb74bc4618b866c not found: ID does not exist" Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.044490 5028 scope.go:117] "RemoveContainer" containerID="ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927" Nov 23 07:39:09 crc kubenswrapper[5028]: E1123 07:39:09.044974 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927\": container with ID starting with ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927 not found: ID does not exist" containerID="ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927" Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.045012 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927"} err="failed to get container status \"ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927\": rpc error: code = NotFound desc = could not find container \"ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927\": container with ID starting with ab3f856f05929f51882cef7eed8a7d1f2b1d2c775b038a51ed1b248173b70927 not found: ID does not exist" Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.045035 5028 scope.go:117] "RemoveContainer" 
containerID="0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b" Nov 23 07:39:09 crc kubenswrapper[5028]: E1123 07:39:09.045322 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b\": container with ID starting with 0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b not found: ID does not exist" containerID="0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b" Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.045368 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b"} err="failed to get container status \"0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b\": rpc error: code = NotFound desc = could not find container \"0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b\": container with ID starting with 0b6cfdd0adbd8e6014687e13e46f05dd1120a422517e63638505acb7f7873b4b not found: ID does not exist" Nov 23 07:39:09 crc kubenswrapper[5028]: I1123 07:39:09.061261 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" path="/var/lib/kubelet/pods/6af68257-b3d0-455f-8350-a08bdcc34139/volumes" Nov 23 07:41:30 crc kubenswrapper[5028]: I1123 07:41:30.946252 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:41:30 crc kubenswrapper[5028]: I1123 07:41:30.946847 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:42:00 crc kubenswrapper[5028]: I1123 07:42:00.945881 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:42:00 crc kubenswrapper[5028]: I1123 07:42:00.946513 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:42:30 crc kubenswrapper[5028]: I1123 07:42:30.946509 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:42:30 crc kubenswrapper[5028]: I1123 07:42:30.947167 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:42:30 crc kubenswrapper[5028]: I1123 07:42:30.947221 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:42:30 crc kubenswrapper[5028]: I1123 07:42:30.947893 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:42:30 crc kubenswrapper[5028]: I1123 07:42:30.948013 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" gracePeriod=600 Nov 23 07:42:31 crc kubenswrapper[5028]: E1123 07:42:31.079776 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:42:31 crc kubenswrapper[5028]: I1123 07:42:31.394496 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" exitCode=0 Nov 23 07:42:31 crc kubenswrapper[5028]: I1123 07:42:31.394543 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363"} Nov 23 07:42:31 crc kubenswrapper[5028]: I1123 07:42:31.394583 5028 scope.go:117] "RemoveContainer" containerID="b865f7bdf18e540f0e634f75e1e714bd143cad1e3fd21f05d39f3f5b3ca6246f" Nov 23 07:42:31 crc kubenswrapper[5028]: I1123 07:42:31.397269 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:42:31 crc kubenswrapper[5028]: E1123 07:42:31.397762 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:42:45 crc kubenswrapper[5028]: I1123 07:42:45.053199 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:42:45 crc kubenswrapper[5028]: E1123 07:42:45.054095 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:42:59 crc kubenswrapper[5028]: I1123 07:42:59.053607 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:42:59 crc kubenswrapper[5028]: E1123 07:42:59.054337 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:43:13 crc kubenswrapper[5028]: I1123 07:43:13.053870 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:43:13 crc kubenswrapper[5028]: E1123 07:43:13.055018 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:43:25 crc kubenswrapper[5028]: I1123 07:43:25.052879 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:43:25 crc kubenswrapper[5028]: E1123 07:43:25.054325 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:43:36 crc kubenswrapper[5028]: I1123 07:43:36.053227 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:43:36 crc kubenswrapper[5028]: E1123 07:43:36.054056 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:43:49 crc kubenswrapper[5028]: I1123 07:43:49.053668 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:43:49 crc kubenswrapper[5028]: E1123 07:43:49.054609 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:44:00 crc kubenswrapper[5028]: I1123 07:44:00.054257 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:44:00 crc kubenswrapper[5028]: E1123 07:44:00.055385 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:44:15 crc kubenswrapper[5028]: I1123 07:44:15.053338 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:44:15 crc kubenswrapper[5028]: E1123 07:44:15.054060 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:44:30 crc kubenswrapper[5028]: I1123 07:44:30.053631 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:44:30 crc kubenswrapper[5028]: E1123 07:44:30.054474 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:44:45 crc kubenswrapper[5028]: I1123 07:44:45.053779 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:44:45 crc kubenswrapper[5028]: E1123 07:44:45.054859 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:44:57 crc kubenswrapper[5028]: I1123 07:44:57.052637 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:44:57 crc kubenswrapper[5028]: E1123 07:44:57.053383 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.206357 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj"] Nov 23 07:45:00 crc kubenswrapper[5028]: E1123 07:45:00.207090 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="extract-utilities" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.207108 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="extract-utilities" Nov 23 07:45:00 crc kubenswrapper[5028]: E1123 07:45:00.207149 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.207161 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[5028]: E1123 07:45:00.207171 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="extract-content" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.207179 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="extract-content" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.207381 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af68257-b3d0-455f-8350-a08bdcc34139" containerName="registry-server" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.207981 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.212081 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.212502 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.221218 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj"] Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.302059 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f11abaa2-daf4-4fd2-a736-fa77aeae7977-secret-volume\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.302109 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f11abaa2-daf4-4fd2-a736-fa77aeae7977-config-volume\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.302201 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djndx\" (UniqueName: \"kubernetes.io/projected/f11abaa2-daf4-4fd2-a736-fa77aeae7977-kube-api-access-djndx\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.404805 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f11abaa2-daf4-4fd2-a736-fa77aeae7977-secret-volume\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.404851 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f11abaa2-daf4-4fd2-a736-fa77aeae7977-config-volume\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.404909 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djndx\" (UniqueName: \"kubernetes.io/projected/f11abaa2-daf4-4fd2-a736-fa77aeae7977-kube-api-access-djndx\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.406256 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f11abaa2-daf4-4fd2-a736-fa77aeae7977-config-volume\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.419200 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f11abaa2-daf4-4fd2-a736-fa77aeae7977-secret-volume\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.422984 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djndx\" (UniqueName: \"kubernetes.io/projected/f11abaa2-daf4-4fd2-a736-fa77aeae7977-kube-api-access-djndx\") pod \"collect-profiles-29398065-f2kbj\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.526038 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:00 crc kubenswrapper[5028]: I1123 07:45:00.973738 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj"] Nov 23 07:45:00 crc kubenswrapper[5028]: W1123 07:45:00.987697 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf11abaa2_daf4_4fd2_a736_fa77aeae7977.slice/crio-140cf94a8c882b6e9173a8552eb63e154aaf027df3ed047986370f021c16c610 WatchSource:0}: Error finding container 140cf94a8c882b6e9173a8552eb63e154aaf027df3ed047986370f021c16c610: Status 404 returned error can't find the container with id 140cf94a8c882b6e9173a8552eb63e154aaf027df3ed047986370f021c16c610 Nov 23 07:45:01 crc kubenswrapper[5028]: I1123 07:45:01.614696 5028 generic.go:334] "Generic (PLEG): container finished" podID="f11abaa2-daf4-4fd2-a736-fa77aeae7977" containerID="d0ef5c5d2057f912261ab15d1d85579091fdae3e615bb5878d560867db1b7c61" exitCode=0 Nov 23 07:45:01 crc kubenswrapper[5028]: I1123 07:45:01.614775 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" event={"ID":"f11abaa2-daf4-4fd2-a736-fa77aeae7977","Type":"ContainerDied","Data":"d0ef5c5d2057f912261ab15d1d85579091fdae3e615bb5878d560867db1b7c61"} Nov 23 07:45:01 crc kubenswrapper[5028]: I1123 07:45:01.615060 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" event={"ID":"f11abaa2-daf4-4fd2-a736-fa77aeae7977","Type":"ContainerStarted","Data":"140cf94a8c882b6e9173a8552eb63e154aaf027df3ed047986370f021c16c610"} Nov 23 07:45:02 crc kubenswrapper[5028]: I1123 07:45:02.940202 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:02 crc kubenswrapper[5028]: I1123 07:45:02.965069 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f11abaa2-daf4-4fd2-a736-fa77aeae7977-secret-volume\") pod \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " Nov 23 07:45:02 crc kubenswrapper[5028]: I1123 07:45:02.965115 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f11abaa2-daf4-4fd2-a736-fa77aeae7977-config-volume\") pod \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " Nov 23 07:45:02 crc kubenswrapper[5028]: I1123 07:45:02.965142 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djndx\" (UniqueName: \"kubernetes.io/projected/f11abaa2-daf4-4fd2-a736-fa77aeae7977-kube-api-access-djndx\") pod \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\" (UID: \"f11abaa2-daf4-4fd2-a736-fa77aeae7977\") " Nov 23 07:45:02 crc kubenswrapper[5028]: I1123 07:45:02.966005 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f11abaa2-daf4-4fd2-a736-fa77aeae7977-config-volume" (OuterVolumeSpecName: "config-volume") pod "f11abaa2-daf4-4fd2-a736-fa77aeae7977" (UID: "f11abaa2-daf4-4fd2-a736-fa77aeae7977"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 07:45:02 crc kubenswrapper[5028]: I1123 07:45:02.970267 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f11abaa2-daf4-4fd2-a736-fa77aeae7977-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f11abaa2-daf4-4fd2-a736-fa77aeae7977" (UID: "f11abaa2-daf4-4fd2-a736-fa77aeae7977"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 07:45:02 crc kubenswrapper[5028]: I1123 07:45:02.970429 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f11abaa2-daf4-4fd2-a736-fa77aeae7977-kube-api-access-djndx" (OuterVolumeSpecName: "kube-api-access-djndx") pod "f11abaa2-daf4-4fd2-a736-fa77aeae7977" (UID: "f11abaa2-daf4-4fd2-a736-fa77aeae7977"). InnerVolumeSpecName "kube-api-access-djndx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:45:03 crc kubenswrapper[5028]: I1123 07:45:03.067588 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f11abaa2-daf4-4fd2-a736-fa77aeae7977-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:03 crc kubenswrapper[5028]: I1123 07:45:03.067627 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f11abaa2-daf4-4fd2-a736-fa77aeae7977-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:03 crc kubenswrapper[5028]: I1123 07:45:03.067638 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djndx\" (UniqueName: \"kubernetes.io/projected/f11abaa2-daf4-4fd2-a736-fa77aeae7977-kube-api-access-djndx\") on node \"crc\" DevicePath \"\"" Nov 23 07:45:03 crc kubenswrapper[5028]: I1123 07:45:03.631555 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" event={"ID":"f11abaa2-daf4-4fd2-a736-fa77aeae7977","Type":"ContainerDied","Data":"140cf94a8c882b6e9173a8552eb63e154aaf027df3ed047986370f021c16c610"} Nov 23 07:45:03 crc kubenswrapper[5028]: I1123 07:45:03.631915 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="140cf94a8c882b6e9173a8552eb63e154aaf027df3ed047986370f021c16c610" Nov 23 07:45:03 crc kubenswrapper[5028]: I1123 07:45:03.631654 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj" Nov 23 07:45:04 crc kubenswrapper[5028]: I1123 07:45:04.004127 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf"] Nov 23 07:45:04 crc kubenswrapper[5028]: I1123 07:45:04.009630 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398020-bbzgf"] Nov 23 07:45:05 crc kubenswrapper[5028]: I1123 07:45:05.063855 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ef0f81-8b76-4b7c-91f8-edb0791421c9" path="/var/lib/kubelet/pods/54ef0f81-8b76-4b7c-91f8-edb0791421c9/volumes" Nov 23 07:45:09 crc kubenswrapper[5028]: I1123 07:45:09.053815 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:45:09 crc kubenswrapper[5028]: E1123 07:45:09.054293 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:45:23 crc kubenswrapper[5028]: I1123 07:45:23.462938 5028 scope.go:117] "RemoveContainer" containerID="3970b8797d02c8c366db59a03f6b97f46485da40662f21bfb4d83497d0380546" Nov 23 07:45:24 crc kubenswrapper[5028]: I1123 07:45:24.053687 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:45:24 crc kubenswrapper[5028]: E1123 07:45:24.053887 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:45:37 crc kubenswrapper[5028]: I1123 07:45:37.052718 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:45:37 crc kubenswrapper[5028]: E1123 07:45:37.054455 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:45:51 crc kubenswrapper[5028]: I1123 07:45:51.053762 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:45:51 crc kubenswrapper[5028]: E1123 07:45:51.055148 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:46:03 crc kubenswrapper[5028]: I1123 07:46:03.054324 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:46:03 crc kubenswrapper[5028]: E1123 07:46:03.055390 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:46:16 crc kubenswrapper[5028]: I1123 07:46:16.053425 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:46:16 crc kubenswrapper[5028]: E1123 07:46:16.054103 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:46:31 crc kubenswrapper[5028]: I1123 07:46:31.053880 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:46:31 crc kubenswrapper[5028]: E1123 07:46:31.055461 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:46:39 crc kubenswrapper[5028]: I1123 07:46:39.903328 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vsqws"] Nov 23 07:46:39 crc kubenswrapper[5028]: E1123 07:46:39.904136 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f11abaa2-daf4-4fd2-a736-fa77aeae7977" containerName="collect-profiles" Nov 23 07:46:39 crc kubenswrapper[5028]: I1123 07:46:39.904150 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f11abaa2-daf4-4fd2-a736-fa77aeae7977" containerName="collect-profiles" Nov 23 07:46:39 crc kubenswrapper[5028]: I1123 07:46:39.904298 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f11abaa2-daf4-4fd2-a736-fa77aeae7977" containerName="collect-profiles" Nov 23 07:46:39 crc kubenswrapper[5028]: I1123 07:46:39.905239 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:39 crc kubenswrapper[5028]: I1123 07:46:39.944654 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsqws"] Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.061742 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-utilities\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.062309 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4l9\" (UniqueName: \"kubernetes.io/projected/436e6336-3f88-4cbc-a4a4-98248527f8c3-kube-api-access-wp4l9\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.062460 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-catalog-content\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.164275 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-catalog-content\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.164428 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-utilities\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.164479 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp4l9\" (UniqueName: \"kubernetes.io/projected/436e6336-3f88-4cbc-a4a4-98248527f8c3-kube-api-access-wp4l9\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.164849 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-catalog-content\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.164866 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-utilities\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.185923 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wp4l9\" (UniqueName: \"kubernetes.io/projected/436e6336-3f88-4cbc-a4a4-98248527f8c3-kube-api-access-wp4l9\") pod \"community-operators-vsqws\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.243677 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:40 crc kubenswrapper[5028]: I1123 07:46:40.719922 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vsqws"] Nov 23 07:46:41 crc kubenswrapper[5028]: I1123 07:46:41.249300 5028 generic.go:334] "Generic (PLEG): container finished" podID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerID="b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2" exitCode=0 Nov 23 07:46:41 crc kubenswrapper[5028]: I1123 07:46:41.249348 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsqws" event={"ID":"436e6336-3f88-4cbc-a4a4-98248527f8c3","Type":"ContainerDied","Data":"b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2"} Nov 23 07:46:41 crc kubenswrapper[5028]: I1123 07:46:41.249377 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsqws" event={"ID":"436e6336-3f88-4cbc-a4a4-98248527f8c3","Type":"ContainerStarted","Data":"9db31cde8e5c5a59b0b31a0062d37248197c1f09cb90ca5706b7124359d6689b"} Nov 23 07:46:41 crc kubenswrapper[5028]: I1123 07:46:41.251311 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:46:42 crc kubenswrapper[5028]: I1123 07:46:42.256472 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsqws" event={"ID":"436e6336-3f88-4cbc-a4a4-98248527f8c3","Type":"ContainerStarted","Data":"30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b"} Nov 23 07:46:43 crc kubenswrapper[5028]: I1123 07:46:43.265147 5028 generic.go:334] "Generic (PLEG): container finished" podID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerID="30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b" exitCode=0 Nov 23 07:46:43 crc kubenswrapper[5028]: I1123 07:46:43.265238 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsqws" event={"ID":"436e6336-3f88-4cbc-a4a4-98248527f8c3","Type":"ContainerDied","Data":"30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b"} Nov 23 07:46:44 crc kubenswrapper[5028]: I1123 07:46:44.274162 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsqws" event={"ID":"436e6336-3f88-4cbc-a4a4-98248527f8c3","Type":"ContainerStarted","Data":"3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df"} Nov 23 07:46:45 crc kubenswrapper[5028]: I1123 07:46:45.052644 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:46:45 crc kubenswrapper[5028]: E1123 07:46:45.052896 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:46:50 crc kubenswrapper[5028]: I1123 07:46:50.245017 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:50 crc kubenswrapper[5028]: I1123 07:46:50.245561 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:50 crc kubenswrapper[5028]: I1123 07:46:50.307491 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:50 crc kubenswrapper[5028]: I1123 07:46:50.339118 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vsqws" podStartSLOduration=8.893375865 podStartE2EDuration="11.339095276s" podCreationTimestamp="2025-11-23 07:46:39 +0000 UTC" firstStartedPulling="2025-11-23 07:46:41.2510871 +0000 UTC m=+3384.948491879" lastFinishedPulling="2025-11-23 07:46:43.696806511 +0000 UTC m=+3387.394211290" observedRunningTime="2025-11-23 07:46:44.302272889 +0000 UTC m=+3387.999677668" watchObservedRunningTime="2025-11-23 07:46:50.339095276 +0000 UTC m=+3394.036500065" Nov 23 07:46:50 crc kubenswrapper[5028]: I1123 07:46:50.381521 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:50 crc kubenswrapper[5028]: I1123 07:46:50.553674 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsqws"] Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.339007 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vsqws" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="registry-server" containerID="cri-o://3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df" gracePeriod=2 Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.768135 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.779938 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp4l9\" (UniqueName: \"kubernetes.io/projected/436e6336-3f88-4cbc-a4a4-98248527f8c3-kube-api-access-wp4l9\") pod \"436e6336-3f88-4cbc-a4a4-98248527f8c3\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.780028 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-catalog-content\") pod \"436e6336-3f88-4cbc-a4a4-98248527f8c3\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.780058 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-utilities\") pod \"436e6336-3f88-4cbc-a4a4-98248527f8c3\" (UID: \"436e6336-3f88-4cbc-a4a4-98248527f8c3\") " Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.782270 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-utilities" (OuterVolumeSpecName: "utilities") pod "436e6336-3f88-4cbc-a4a4-98248527f8c3" (UID: "436e6336-3f88-4cbc-a4a4-98248527f8c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.791353 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e6336-3f88-4cbc-a4a4-98248527f8c3-kube-api-access-wp4l9" (OuterVolumeSpecName: "kube-api-access-wp4l9") pod "436e6336-3f88-4cbc-a4a4-98248527f8c3" (UID: "436e6336-3f88-4cbc-a4a4-98248527f8c3"). InnerVolumeSpecName "kube-api-access-wp4l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.881330 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp4l9\" (UniqueName: \"kubernetes.io/projected/436e6336-3f88-4cbc-a4a4-98248527f8c3-kube-api-access-wp4l9\") on node \"crc\" DevicePath \"\"" Nov 23 07:46:52 crc kubenswrapper[5028]: I1123 07:46:52.881778 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.013481 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "436e6336-3f88-4cbc-a4a4-98248527f8c3" (UID: "436e6336-3f88-4cbc-a4a4-98248527f8c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.084378 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/436e6336-3f88-4cbc-a4a4-98248527f8c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.352197 5028 generic.go:334] "Generic (PLEG): container finished" podID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerID="3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df" exitCode=0 Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.352266 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vsqws" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.352280 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsqws" event={"ID":"436e6336-3f88-4cbc-a4a4-98248527f8c3","Type":"ContainerDied","Data":"3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df"} Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.352334 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vsqws" event={"ID":"436e6336-3f88-4cbc-a4a4-98248527f8c3","Type":"ContainerDied","Data":"9db31cde8e5c5a59b0b31a0062d37248197c1f09cb90ca5706b7124359d6689b"} Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.352374 5028 scope.go:117] "RemoveContainer" containerID="3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.379672 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vsqws"] Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.381571 5028 scope.go:117] "RemoveContainer" containerID="30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.389474 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vsqws"] Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.402479 5028 scope.go:117] "RemoveContainer" containerID="b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.443290 5028 scope.go:117] "RemoveContainer" containerID="3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df" Nov 23 07:46:53 crc kubenswrapper[5028]: E1123 07:46:53.443615 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df\": container with ID starting with 3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df not found: ID does not exist" containerID="3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.443734 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df"} err="failed to get container status \"3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df\": rpc error: code = NotFound desc = could not find container \"3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df\": container with ID starting with 3f3306fb4d6578a7ec436ae7a397a357bcd2ddff6050946b5aac87db0818d6df not found: ID does not exist" Nov 23 
07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.443774 5028 scope.go:117] "RemoveContainer" containerID="30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b" Nov 23 07:46:53 crc kubenswrapper[5028]: E1123 07:46:53.444532 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b\": container with ID starting with 30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b not found: ID does not exist" containerID="30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.444651 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b"} err="failed to get container status \"30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b\": rpc error: code = NotFound desc = could not find container \"30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b\": container with ID starting with 30af00215df74405aa39a95c9bf0368d35d7acd34427c6a0a9e08e1f7e20a89b not found: ID does not exist" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.444780 5028 scope.go:117] "RemoveContainer" containerID="b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2" Nov 23 07:46:53 crc kubenswrapper[5028]: E1123 07:46:53.445277 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2\": container with ID starting with b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2 not found: ID does not exist" containerID="b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2" Nov 23 07:46:53 crc kubenswrapper[5028]: I1123 07:46:53.445315 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2"} err="failed to get container status \"b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2\": rpc error: code = NotFound desc = could not find container \"b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2\": container with ID starting with b4ec7f9e07174d8895cd78965dbfdaa8ac6f212056fe1c03051c19b48ad23df2 not found: ID does not exist" Nov 23 07:46:55 crc kubenswrapper[5028]: I1123 07:46:55.067143 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" path="/var/lib/kubelet/pods/436e6336-3f88-4cbc-a4a4-98248527f8c3/volumes" Nov 23 07:46:59 crc kubenswrapper[5028]: I1123 07:46:59.052569 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:46:59 crc kubenswrapper[5028]: E1123 07:46:59.053463 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.253979 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-qjwqq"] Nov 23 07:47:08 crc kubenswrapper[5028]: E1123 07:47:08.256228 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="registry-server" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.256246 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="registry-server" Nov 23 07:47:08 crc kubenswrapper[5028]: E1123 07:47:08.256269 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="extract-content" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.256277 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="extract-content" Nov 23 07:47:08 crc kubenswrapper[5028]: E1123 07:47:08.256304 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="extract-utilities" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.256315 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="extract-utilities" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.256483 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="436e6336-3f88-4cbc-a4a4-98248527f8c3" containerName="registry-server" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.257745 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.272996 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjwqq"] Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.325514 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrxcc\" (UniqueName: \"kubernetes.io/projected/ac6c0fd0-589a-49df-82c7-07e95d58febf-kube-api-access-wrxcc\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.325644 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-utilities\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.325782 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-catalog-content\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.427245 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-catalog-content\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.427745 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wrxcc\" (UniqueName: \"kubernetes.io/projected/ac6c0fd0-589a-49df-82c7-07e95d58febf-kube-api-access-wrxcc\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.427849 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-utilities\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.427994 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-catalog-content\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.428404 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-utilities\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.447479 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrxcc\" (UniqueName: \"kubernetes.io/projected/ac6c0fd0-589a-49df-82c7-07e95d58febf-kube-api-access-wrxcc\") pod \"redhat-operators-qjwqq\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:08 crc kubenswrapper[5028]: I1123 07:47:08.583204 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:09 crc kubenswrapper[5028]: I1123 07:47:09.017299 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjwqq"] Nov 23 07:47:09 crc kubenswrapper[5028]: E1123 07:47:09.350878 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac6c0fd0_589a_49df_82c7_07e95d58febf.slice/crio-conmon-ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:47:09 crc kubenswrapper[5028]: I1123 07:47:09.494415 5028 generic.go:334] "Generic (PLEG): container finished" podID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerID="ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314" exitCode=0 Nov 23 07:47:09 crc kubenswrapper[5028]: I1123 07:47:09.494493 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjwqq" event={"ID":"ac6c0fd0-589a-49df-82c7-07e95d58febf","Type":"ContainerDied","Data":"ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314"} Nov 23 07:47:09 crc kubenswrapper[5028]: I1123 07:47:09.494809 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjwqq" event={"ID":"ac6c0fd0-589a-49df-82c7-07e95d58febf","Type":"ContainerStarted","Data":"46f0ac8ea8e7710fb86c37307f7c07c19efe0c8eff7aa3a3b6e17eeec4cff605"} Nov 23 07:47:10 crc kubenswrapper[5028]: I1123 07:47:10.504392 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjwqq" event={"ID":"ac6c0fd0-589a-49df-82c7-07e95d58febf","Type":"ContainerStarted","Data":"1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df"} Nov 23 07:47:11 crc kubenswrapper[5028]: I1123 07:47:11.513663 5028 generic.go:334] "Generic (PLEG): container finished" podID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerID="1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df" exitCode=0 Nov 23 07:47:11 crc kubenswrapper[5028]: I1123 07:47:11.513751 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjwqq" event={"ID":"ac6c0fd0-589a-49df-82c7-07e95d58febf","Type":"ContainerDied","Data":"1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df"} Nov 23 07:47:12 crc kubenswrapper[5028]: I1123 07:47:12.526337 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjwqq" event={"ID":"ac6c0fd0-589a-49df-82c7-07e95d58febf","Type":"ContainerStarted","Data":"d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3"} Nov 23 07:47:12 crc kubenswrapper[5028]: I1123 07:47:12.555340 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qjwqq" podStartSLOduration=2.1431084240000002 podStartE2EDuration="4.555313018s" podCreationTimestamp="2025-11-23 07:47:08 +0000 UTC" firstStartedPulling="2025-11-23 07:47:09.496583914 +0000 UTC m=+3413.193988693" lastFinishedPulling="2025-11-23 07:47:11.908788508 +0000 UTC m=+3415.606193287" observedRunningTime="2025-11-23 07:47:12.551350402 +0000 UTC m=+3416.248755191" watchObservedRunningTime="2025-11-23 07:47:12.555313018 +0000 UTC m=+3416.252717817" Nov 23 07:47:14 crc kubenswrapper[5028]: I1123 07:47:14.052853 5028 scope.go:117] "RemoveContainer" 
containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:47:14 crc kubenswrapper[5028]: E1123 07:47:14.054543 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:47:18 crc kubenswrapper[5028]: I1123 07:47:18.584383 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:18 crc kubenswrapper[5028]: I1123 07:47:18.584845 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:18 crc kubenswrapper[5028]: I1123 07:47:18.629015 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:19 crc kubenswrapper[5028]: I1123 07:47:19.617215 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:19 crc kubenswrapper[5028]: I1123 07:47:19.671004 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjwqq"] Nov 23 07:47:21 crc kubenswrapper[5028]: I1123 07:47:21.601139 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qjwqq" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="registry-server" containerID="cri-o://d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3" gracePeriod=2 Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.562493 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.609138 5028 generic.go:334] "Generic (PLEG): container finished" podID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerID="d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3" exitCode=0 Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.609179 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjwqq" event={"ID":"ac6c0fd0-589a-49df-82c7-07e95d58febf","Type":"ContainerDied","Data":"d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3"} Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.609216 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjwqq" event={"ID":"ac6c0fd0-589a-49df-82c7-07e95d58febf","Type":"ContainerDied","Data":"46f0ac8ea8e7710fb86c37307f7c07c19efe0c8eff7aa3a3b6e17eeec4cff605"} Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.609236 5028 scope.go:117] "RemoveContainer" containerID="d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.609265 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qjwqq" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.631327 5028 scope.go:117] "RemoveContainer" containerID="1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.643575 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-utilities\") pod \"ac6c0fd0-589a-49df-82c7-07e95d58febf\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.643624 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrxcc\" (UniqueName: \"kubernetes.io/projected/ac6c0fd0-589a-49df-82c7-07e95d58febf-kube-api-access-wrxcc\") pod \"ac6c0fd0-589a-49df-82c7-07e95d58febf\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.643698 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-catalog-content\") pod \"ac6c0fd0-589a-49df-82c7-07e95d58febf\" (UID: \"ac6c0fd0-589a-49df-82c7-07e95d58febf\") " Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.644711 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-utilities" (OuterVolumeSpecName: "utilities") pod "ac6c0fd0-589a-49df-82c7-07e95d58febf" (UID: "ac6c0fd0-589a-49df-82c7-07e95d58febf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.649358 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac6c0fd0-589a-49df-82c7-07e95d58febf-kube-api-access-wrxcc" (OuterVolumeSpecName: "kube-api-access-wrxcc") pod "ac6c0fd0-589a-49df-82c7-07e95d58febf" (UID: "ac6c0fd0-589a-49df-82c7-07e95d58febf"). InnerVolumeSpecName "kube-api-access-wrxcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.650873 5028 scope.go:117] "RemoveContainer" containerID="ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.698590 5028 scope.go:117] "RemoveContainer" containerID="d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3" Nov 23 07:47:22 crc kubenswrapper[5028]: E1123 07:47:22.698980 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3\": container with ID starting with d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3 not found: ID does not exist" containerID="d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.699028 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3"} err="failed to get container status \"d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3\": rpc error: code = NotFound desc = could not find container \"d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3\": container with ID starting with d88c8de465497c8005cda365419735319a6f80306ea577ce499d16506aa8a9d3 not found: ID does not exist" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.699058 5028 scope.go:117] "RemoveContainer" containerID="1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df" Nov 23 07:47:22 crc kubenswrapper[5028]: E1123 07:47:22.699331 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df\": container with ID starting with 1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df not found: ID does not exist" containerID="1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.699358 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df"} err="failed to get container status \"1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df\": rpc error: code = NotFound desc = could not find container \"1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df\": container with ID starting with 1649572b7faf2a1336ec55a0ef3e8d9458f8a7c8ecae0b7d3b12057438b021df not found: ID does not exist" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.699374 5028 scope.go:117] "RemoveContainer" containerID="ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314" Nov 23 07:47:22 crc kubenswrapper[5028]: E1123 07:47:22.699625 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314\": container with ID starting with ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314 not found: ID does not exist" containerID="ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.699642 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314"} err="failed to get container status \"ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314\": rpc error: code = NotFound desc = could not find container \"ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314\": container with ID starting with ec8d66bcfe033b686743f8535a93af2f8f5fae910951b7a9820ff1d8a3e24314 not found: ID does not exist" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.739161 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac6c0fd0-589a-49df-82c7-07e95d58febf" (UID: "ac6c0fd0-589a-49df-82c7-07e95d58febf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.745168 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.745187 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac6c0fd0-589a-49df-82c7-07e95d58febf-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.745196 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrxcc\" (UniqueName: \"kubernetes.io/projected/ac6c0fd0-589a-49df-82c7-07e95d58febf-kube-api-access-wrxcc\") on node \"crc\" DevicePath \"\"" Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.940251 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjwqq"] Nov 23 07:47:22 crc kubenswrapper[5028]: I1123 07:47:22.945209 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qjwqq"] Nov 23 07:47:23 crc kubenswrapper[5028]: I1123 07:47:23.066438 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" path="/var/lib/kubelet/pods/ac6c0fd0-589a-49df-82c7-07e95d58febf/volumes" Nov 23 07:47:29 crc kubenswrapper[5028]: I1123 07:47:29.054151 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:47:29 crc kubenswrapper[5028]: E1123 07:47:29.055372 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:47:40 crc kubenswrapper[5028]: I1123 07:47:40.053553 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:47:40 crc kubenswrapper[5028]: I1123 07:47:40.769883 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"2cc54bfefe95ae34046b2f945755dfa9ecb65737842e9904f8c0c661b4a95c92"} Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 
07:49:29.561503 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6s7kq"] Nov 23 07:49:29 crc kubenswrapper[5028]: E1123 07:49:29.562826 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="registry-server" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.562858 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="registry-server" Nov 23 07:49:29 crc kubenswrapper[5028]: E1123 07:49:29.562904 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="extract-utilities" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.562925 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="extract-utilities" Nov 23 07:49:29 crc kubenswrapper[5028]: E1123 07:49:29.563000 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="extract-content" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.563020 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="extract-content" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.563444 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac6c0fd0-589a-49df-82c7-07e95d58febf" containerName="registry-server" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.566327 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.583092 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s7kq"] Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.710873 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-utilities\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.711083 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw5pj\" (UniqueName: \"kubernetes.io/projected/3b0f4487-75cf-4492-a692-60454cfdbaf5-kube-api-access-gw5pj\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.711298 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-catalog-content\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.812723 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-catalog-content\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc 
kubenswrapper[5028]: I1123 07:49:29.812839 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-utilities\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.812895 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw5pj\" (UniqueName: \"kubernetes.io/projected/3b0f4487-75cf-4492-a692-60454cfdbaf5-kube-api-access-gw5pj\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.813479 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-utilities\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.813485 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-catalog-content\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.838074 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw5pj\" (UniqueName: \"kubernetes.io/projected/3b0f4487-75cf-4492-a692-60454cfdbaf5-kube-api-access-gw5pj\") pod \"redhat-marketplace-6s7kq\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:29 crc kubenswrapper[5028]: I1123 07:49:29.902085 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:30 crc kubenswrapper[5028]: I1123 07:49:30.365738 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s7kq"] Nov 23 07:49:30 crc kubenswrapper[5028]: I1123 07:49:30.731022 5028 generic.go:334] "Generic (PLEG): container finished" podID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerID="e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092" exitCode=0 Nov 23 07:49:30 crc kubenswrapper[5028]: I1123 07:49:30.731088 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s7kq" event={"ID":"3b0f4487-75cf-4492-a692-60454cfdbaf5","Type":"ContainerDied","Data":"e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092"} Nov 23 07:49:30 crc kubenswrapper[5028]: I1123 07:49:30.731386 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s7kq" event={"ID":"3b0f4487-75cf-4492-a692-60454cfdbaf5","Type":"ContainerStarted","Data":"e38a3af72eb1cb25c0fa2e8dd0f52deb42991d23610748e4d28f33b4e4788f54"} Nov 23 07:49:31 crc kubenswrapper[5028]: I1123 07:49:31.744079 5028 generic.go:334] "Generic (PLEG): container finished" podID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerID="b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0" exitCode=0 Nov 23 07:49:31 crc kubenswrapper[5028]: I1123 07:49:31.744218 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s7kq" event={"ID":"3b0f4487-75cf-4492-a692-60454cfdbaf5","Type":"ContainerDied","Data":"b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0"} Nov 23 07:49:32 crc kubenswrapper[5028]: I1123 07:49:32.753060 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s7kq" event={"ID":"3b0f4487-75cf-4492-a692-60454cfdbaf5","Type":"ContainerStarted","Data":"5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905"} Nov 23 07:49:39 crc kubenswrapper[5028]: I1123 07:49:39.902217 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:39 crc kubenswrapper[5028]: I1123 07:49:39.902814 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:39 crc kubenswrapper[5028]: I1123 07:49:39.943379 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:39 crc kubenswrapper[5028]: I1123 07:49:39.965730 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6s7kq" podStartSLOduration=9.572519214 podStartE2EDuration="10.965685614s" podCreationTimestamp="2025-11-23 07:49:29 +0000 UTC" firstStartedPulling="2025-11-23 07:49:30.733377139 +0000 UTC m=+3554.430781918" lastFinishedPulling="2025-11-23 07:49:32.126543539 +0000 UTC m=+3555.823948318" observedRunningTime="2025-11-23 07:49:32.773756848 +0000 UTC m=+3556.471161627" watchObservedRunningTime="2025-11-23 07:49:39.965685614 +0000 UTC m=+3563.663090393" Nov 23 07:49:40 crc kubenswrapper[5028]: I1123 07:49:40.894093 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:40 crc kubenswrapper[5028]: I1123 07:49:40.947568 5028 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-6s7kq"] Nov 23 07:49:42 crc kubenswrapper[5028]: I1123 07:49:42.842604 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6s7kq" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="registry-server" containerID="cri-o://5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905" gracePeriod=2 Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.262202 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.372852 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-utilities\") pod \"3b0f4487-75cf-4492-a692-60454cfdbaf5\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.372918 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-catalog-content\") pod \"3b0f4487-75cf-4492-a692-60454cfdbaf5\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.373045 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw5pj\" (UniqueName: \"kubernetes.io/projected/3b0f4487-75cf-4492-a692-60454cfdbaf5-kube-api-access-gw5pj\") pod \"3b0f4487-75cf-4492-a692-60454cfdbaf5\" (UID: \"3b0f4487-75cf-4492-a692-60454cfdbaf5\") " Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.374212 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-utilities" (OuterVolumeSpecName: "utilities") pod "3b0f4487-75cf-4492-a692-60454cfdbaf5" (UID: "3b0f4487-75cf-4492-a692-60454cfdbaf5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.379680 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b0f4487-75cf-4492-a692-60454cfdbaf5-kube-api-access-gw5pj" (OuterVolumeSpecName: "kube-api-access-gw5pj") pod "3b0f4487-75cf-4492-a692-60454cfdbaf5" (UID: "3b0f4487-75cf-4492-a692-60454cfdbaf5"). InnerVolumeSpecName "kube-api-access-gw5pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.391737 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b0f4487-75cf-4492-a692-60454cfdbaf5" (UID: "3b0f4487-75cf-4492-a692-60454cfdbaf5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.475288 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw5pj\" (UniqueName: \"kubernetes.io/projected/3b0f4487-75cf-4492-a692-60454cfdbaf5-kube-api-access-gw5pj\") on node \"crc\" DevicePath \"\"" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.475360 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.475374 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b0f4487-75cf-4492-a692-60454cfdbaf5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.850893 5028 generic.go:334] "Generic (PLEG): container finished" podID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerID="5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905" exitCode=0 Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.850988 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s7kq" event={"ID":"3b0f4487-75cf-4492-a692-60454cfdbaf5","Type":"ContainerDied","Data":"5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905"} Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.851321 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s7kq" event={"ID":"3b0f4487-75cf-4492-a692-60454cfdbaf5","Type":"ContainerDied","Data":"e38a3af72eb1cb25c0fa2e8dd0f52deb42991d23610748e4d28f33b4e4788f54"} Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.851344 5028 scope.go:117] "RemoveContainer" containerID="5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.850996 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s7kq" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.869068 5028 scope.go:117] "RemoveContainer" containerID="b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.881045 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s7kq"] Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.893177 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s7kq"] Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.910896 5028 scope.go:117] "RemoveContainer" containerID="e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.957035 5028 scope.go:117] "RemoveContainer" containerID="5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905" Nov 23 07:49:43 crc kubenswrapper[5028]: E1123 07:49:43.959152 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905\": container with ID starting with 5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905 not found: ID does not exist" containerID="5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.959217 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905"} err="failed to get container status \"5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905\": rpc error: code = NotFound desc = could not find container \"5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905\": container with ID starting with 5d3703c81b7febb83258fe8e0388d54650093298d759ebe8515df0c6e3b7e905 not found: ID does not exist" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.959246 5028 scope.go:117] "RemoveContainer" containerID="b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0" Nov 23 07:49:43 crc kubenswrapper[5028]: E1123 07:49:43.959532 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0\": container with ID starting with b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0 not found: ID does not exist" containerID="b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.959562 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0"} err="failed to get container status \"b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0\": rpc error: code = NotFound desc = could not find container \"b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0\": container with ID starting with b1758aab5641391e0a6fe3ac4b85680123b2cddfe38d2900a6dfe8657ab5f9c0 not found: ID does not exist" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.959581 5028 scope.go:117] "RemoveContainer" containerID="e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092" Nov 23 07:49:43 crc kubenswrapper[5028]: E1123 07:49:43.959768 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092\": container with ID starting with e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092 not found: ID does not exist" containerID="e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092" Nov 23 07:49:43 crc kubenswrapper[5028]: I1123 07:49:43.959791 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092"} err="failed to get container status \"e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092\": rpc error: code = NotFound desc = could not find container \"e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092\": container with ID starting with e87b8c60d2b2c419106e53ff00e998172767c6f02c7f7bbd0837353986ca3092 not found: ID does not exist" Nov 23 07:49:45 crc kubenswrapper[5028]: I1123 07:49:45.064688 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" path="/var/lib/kubelet/pods/3b0f4487-75cf-4492-a692-60454cfdbaf5/volumes" Nov 23 07:50:00 crc kubenswrapper[5028]: I1123 07:50:00.946865 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:50:00 crc kubenswrapper[5028]: I1123 07:50:00.947996 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:50:30 crc kubenswrapper[5028]: I1123 07:50:30.946382 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:50:30 crc kubenswrapper[5028]: I1123 07:50:30.947002 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:51:00 crc kubenswrapper[5028]: I1123 07:51:00.946496 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:51:00 crc kubenswrapper[5028]: I1123 07:51:00.947056 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:51:00 crc kubenswrapper[5028]: I1123 07:51:00.947103 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:51:00 crc kubenswrapper[5028]: I1123 07:51:00.947710 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cc54bfefe95ae34046b2f945755dfa9ecb65737842e9904f8c0c661b4a95c92"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:51:00 crc kubenswrapper[5028]: I1123 07:51:00.947768 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://2cc54bfefe95ae34046b2f945755dfa9ecb65737842e9904f8c0c661b4a95c92" gracePeriod=600 Nov 23 07:51:01 crc kubenswrapper[5028]: I1123 07:51:01.534845 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="2cc54bfefe95ae34046b2f945755dfa9ecb65737842e9904f8c0c661b4a95c92" exitCode=0 Nov 23 07:51:01 crc kubenswrapper[5028]: I1123 07:51:01.534962 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"2cc54bfefe95ae34046b2f945755dfa9ecb65737842e9904f8c0c661b4a95c92"} Nov 23 07:51:01 crc kubenswrapper[5028]: I1123 07:51:01.535170 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"} Nov 23 07:51:01 crc kubenswrapper[5028]: I1123 07:51:01.535192 5028 scope.go:117] "RemoveContainer" containerID="426105706f9ad9524ada0f99e8f8c178257d024011e6d678fd66de8edc918363" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.075907 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g7kpb"] Nov 23 07:51:53 crc kubenswrapper[5028]: E1123 07:51:53.076750 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="extract-content" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.076763 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="extract-content" Nov 23 07:51:53 crc kubenswrapper[5028]: E1123 07:51:53.076787 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="registry-server" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.076793 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="registry-server" Nov 23 07:51:53 crc kubenswrapper[5028]: E1123 07:51:53.076805 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="extract-utilities" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.076811 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="extract-utilities" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.076945 5028 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b0f4487-75cf-4492-a692-60454cfdbaf5" containerName="registry-server" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.077848 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.093575 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7kpb"] Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.261683 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm9b\" (UniqueName: \"kubernetes.io/projected/5204ab0e-62f6-470e-9b1e-da0be283b4d9-kube-api-access-9lm9b\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.261804 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-utilities\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.261943 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-catalog-content\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.362916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lm9b\" (UniqueName: \"kubernetes.io/projected/5204ab0e-62f6-470e-9b1e-da0be283b4d9-kube-api-access-9lm9b\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.363024 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-utilities\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.363132 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-catalog-content\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.363594 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-utilities\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.363681 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-catalog-content\") pod \"certified-operators-g7kpb\" (UID: 
\"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.384155 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lm9b\" (UniqueName: \"kubernetes.io/projected/5204ab0e-62f6-470e-9b1e-da0be283b4d9-kube-api-access-9lm9b\") pod \"certified-operators-g7kpb\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.395270 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.640801 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7kpb"] Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.957160 5028 generic.go:334] "Generic (PLEG): container finished" podID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerID="37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29" exitCode=0 Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.957215 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7kpb" event={"ID":"5204ab0e-62f6-470e-9b1e-da0be283b4d9","Type":"ContainerDied","Data":"37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29"} Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.957249 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7kpb" event={"ID":"5204ab0e-62f6-470e-9b1e-da0be283b4d9","Type":"ContainerStarted","Data":"ec1c9310a4fd47e77ae4994836f32efc38f7b555ba16893119da804277421add"} Nov 23 07:51:53 crc kubenswrapper[5028]: I1123 07:51:53.958742 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:51:54 crc kubenswrapper[5028]: E1123 07:51:54.948668 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5204ab0e_62f6_470e_9b1e_da0be283b4d9.slice/crio-conmon-6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5.scope\": RecentStats: unable to find data in memory cache]" Nov 23 07:51:54 crc kubenswrapper[5028]: I1123 07:51:54.966587 5028 generic.go:334] "Generic (PLEG): container finished" podID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerID="6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5" exitCode=0 Nov 23 07:51:54 crc kubenswrapper[5028]: I1123 07:51:54.966632 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7kpb" event={"ID":"5204ab0e-62f6-470e-9b1e-da0be283b4d9","Type":"ContainerDied","Data":"6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5"} Nov 23 07:51:55 crc kubenswrapper[5028]: I1123 07:51:55.975161 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7kpb" event={"ID":"5204ab0e-62f6-470e-9b1e-da0be283b4d9","Type":"ContainerStarted","Data":"25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc"} Nov 23 07:51:55 crc kubenswrapper[5028]: I1123 07:51:55.996241 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g7kpb" podStartSLOduration=1.627226909 podStartE2EDuration="2.996222648s" podCreationTimestamp="2025-11-23 07:51:53 
+0000 UTC" firstStartedPulling="2025-11-23 07:51:53.958471723 +0000 UTC m=+3697.655876502" lastFinishedPulling="2025-11-23 07:51:55.327467462 +0000 UTC m=+3699.024872241" observedRunningTime="2025-11-23 07:51:55.99345579 +0000 UTC m=+3699.690860579" watchObservedRunningTime="2025-11-23 07:51:55.996222648 +0000 UTC m=+3699.693627427" Nov 23 07:52:03 crc kubenswrapper[5028]: I1123 07:52:03.396236 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:52:03 crc kubenswrapper[5028]: I1123 07:52:03.396880 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:52:03 crc kubenswrapper[5028]: I1123 07:52:03.473990 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:52:04 crc kubenswrapper[5028]: I1123 07:52:04.084378 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:52:04 crc kubenswrapper[5028]: I1123 07:52:04.134018 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7kpb"] Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.060376 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g7kpb" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="registry-server" containerID="cri-o://25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc" gracePeriod=2 Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.551932 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.662137 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-utilities\") pod \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.662272 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-catalog-content\") pod \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.662315 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lm9b\" (UniqueName: \"kubernetes.io/projected/5204ab0e-62f6-470e-9b1e-da0be283b4d9-kube-api-access-9lm9b\") pod \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\" (UID: \"5204ab0e-62f6-470e-9b1e-da0be283b4d9\") " Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.663584 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-utilities" (OuterVolumeSpecName: "utilities") pod "5204ab0e-62f6-470e-9b1e-da0be283b4d9" (UID: "5204ab0e-62f6-470e-9b1e-da0be283b4d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.666818 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5204ab0e-62f6-470e-9b1e-da0be283b4d9-kube-api-access-9lm9b" (OuterVolumeSpecName: "kube-api-access-9lm9b") pod "5204ab0e-62f6-470e-9b1e-da0be283b4d9" (UID: "5204ab0e-62f6-470e-9b1e-da0be283b4d9"). InnerVolumeSpecName "kube-api-access-9lm9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.707216 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5204ab0e-62f6-470e-9b1e-da0be283b4d9" (UID: "5204ab0e-62f6-470e-9b1e-da0be283b4d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.764305 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.764336 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lm9b\" (UniqueName: \"kubernetes.io/projected/5204ab0e-62f6-470e-9b1e-da0be283b4d9-kube-api-access-9lm9b\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:06 crc kubenswrapper[5028]: I1123 07:52:06.764348 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5204ab0e-62f6-470e-9b1e-da0be283b4d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.069623 5028 generic.go:334] "Generic (PLEG): container finished" podID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerID="25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc" exitCode=0 Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.069672 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7kpb" event={"ID":"5204ab0e-62f6-470e-9b1e-da0be283b4d9","Type":"ContainerDied","Data":"25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc"} Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.069708 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g7kpb" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.069732 5028 scope.go:117] "RemoveContainer" containerID="25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.069719 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7kpb" event={"ID":"5204ab0e-62f6-470e-9b1e-da0be283b4d9","Type":"ContainerDied","Data":"ec1c9310a4fd47e77ae4994836f32efc38f7b555ba16893119da804277421add"} Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.103381 5028 scope.go:117] "RemoveContainer" containerID="6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.113050 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7kpb"] Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.122520 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g7kpb"] Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.135456 5028 scope.go:117] "RemoveContainer" containerID="37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.164235 5028 scope.go:117] "RemoveContainer" containerID="25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc" Nov 23 07:52:07 crc kubenswrapper[5028]: E1123 07:52:07.164865 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc\": container with ID starting with 25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc not found: ID does not exist" containerID="25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.165072 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc"} err="failed to get container status \"25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc\": rpc error: code = NotFound desc = could not find container \"25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc\": container with ID starting with 25ac394262a57620e1e6559bffd43aace7f9e922bd20ba18a97ef8652c5871fc not found: ID does not exist" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.165191 5028 scope.go:117] "RemoveContainer" containerID="6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5" Nov 23 07:52:07 crc kubenswrapper[5028]: E1123 07:52:07.165669 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5\": container with ID starting with 6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5 not found: ID does not exist" containerID="6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.165827 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5"} err="failed to get container status \"6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5\": rpc error: code = NotFound desc = could not find 
container \"6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5\": container with ID starting with 6c857cd510f22ed6e923d9bab37b8756dd7ee5bc1ff989e8da3028bdcb4475b5 not found: ID does not exist" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.165868 5028 scope.go:117] "RemoveContainer" containerID="37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29" Nov 23 07:52:07 crc kubenswrapper[5028]: E1123 07:52:07.166416 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29\": container with ID starting with 37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29 not found: ID does not exist" containerID="37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29" Nov 23 07:52:07 crc kubenswrapper[5028]: I1123 07:52:07.166463 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29"} err="failed to get container status \"37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29\": rpc error: code = NotFound desc = could not find container \"37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29\": container with ID starting with 37e70e734dc40a9b079b4609eaad44a688f3470af89302a1e34c61dba0e1ce29 not found: ID does not exist" Nov 23 07:52:09 crc kubenswrapper[5028]: I1123 07:52:09.063938 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" path="/var/lib/kubelet/pods/5204ab0e-62f6-470e-9b1e-da0be283b4d9/volumes" Nov 23 07:53:30 crc kubenswrapper[5028]: I1123 07:53:30.947003 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:53:30 crc kubenswrapper[5028]: I1123 07:53:30.948064 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:54:00 crc kubenswrapper[5028]: I1123 07:54:00.946455 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 07:54:00 crc kubenswrapper[5028]: I1123 07:54:00.947730 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:54:30 crc kubenswrapper[5028]: I1123 07:54:30.947053 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 
07:54:30 crc kubenswrapper[5028]: I1123 07:54:30.947726 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 07:54:30 crc kubenswrapper[5028]: I1123 07:54:30.947790 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 07:54:30 crc kubenswrapper[5028]: I1123 07:54:30.948622 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 07:54:30 crc kubenswrapper[5028]: I1123 07:54:30.948709 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" gracePeriod=600 Nov 23 07:54:31 crc kubenswrapper[5028]: E1123 07:54:31.077878 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:54:31 crc kubenswrapper[5028]: I1123 07:54:31.208353 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" exitCode=0 Nov 23 07:54:31 crc kubenswrapper[5028]: I1123 07:54:31.208419 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"} Nov 23 07:54:31 crc kubenswrapper[5028]: I1123 07:54:31.208469 5028 scope.go:117] "RemoveContainer" containerID="2cc54bfefe95ae34046b2f945755dfa9ecb65737842e9904f8c0c661b4a95c92" Nov 23 07:54:31 crc kubenswrapper[5028]: I1123 07:54:31.209257 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:54:31 crc kubenswrapper[5028]: E1123 07:54:31.209663 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:54:45 crc kubenswrapper[5028]: I1123 07:54:45.052759 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:54:45 crc 
kubenswrapper[5028]: E1123 07:54:45.053619 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:55:00 crc kubenswrapper[5028]: I1123 07:55:00.053624 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:55:00 crc kubenswrapper[5028]: E1123 07:55:00.054759 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:55:11 crc kubenswrapper[5028]: I1123 07:55:11.053322 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:55:11 crc kubenswrapper[5028]: E1123 07:55:11.054106 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:55:24 crc kubenswrapper[5028]: I1123 07:55:24.052483 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:55:24 crc kubenswrapper[5028]: E1123 07:55:24.053280 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:55:38 crc kubenswrapper[5028]: I1123 07:55:38.052662 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:55:38 crc kubenswrapper[5028]: E1123 07:55:38.053438 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:55:51 crc kubenswrapper[5028]: I1123 07:55:51.053505 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:55:51 crc kubenswrapper[5028]: E1123 07:55:51.054704 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:56:05 crc kubenswrapper[5028]: I1123 07:56:05.053944 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:56:05 crc kubenswrapper[5028]: E1123 07:56:05.055236 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:56:16 crc kubenswrapper[5028]: I1123 07:56:16.054190 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:56:16 crc kubenswrapper[5028]: E1123 07:56:16.055379 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:56:27 crc kubenswrapper[5028]: I1123 07:56:27.057904 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:56:27 crc kubenswrapper[5028]: E1123 07:56:27.058758 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:56:39 crc kubenswrapper[5028]: I1123 07:56:39.052920 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:56:39 crc kubenswrapper[5028]: E1123 07:56:39.053720 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:56:54 crc kubenswrapper[5028]: I1123 07:56:54.052914 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:56:54 crc kubenswrapper[5028]: E1123 07:56:54.053679 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.850309 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q8p88"] Nov 23 07:56:57 crc kubenswrapper[5028]: E1123 07:56:57.851234 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="extract-utilities" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.851253 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="extract-utilities" Nov 23 07:56:57 crc kubenswrapper[5028]: E1123 07:56:57.851273 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="registry-server" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.851282 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="registry-server" Nov 23 07:56:57 crc kubenswrapper[5028]: E1123 07:56:57.851301 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="extract-content" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.851310 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="extract-content" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.851497 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5204ab0e-62f6-470e-9b1e-da0be283b4d9" containerName="registry-server" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.855662 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.863934 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8p88"] Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.969259 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-utilities\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.969581 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4hdw\" (UniqueName: \"kubernetes.io/projected/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-kube-api-access-c4hdw\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:57 crc kubenswrapper[5028]: I1123 07:56:57.969610 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-catalog-content\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.070735 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-utilities\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.070895 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4hdw\" (UniqueName: \"kubernetes.io/projected/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-kube-api-access-c4hdw\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.071074 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-catalog-content\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.071551 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-catalog-content\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.071777 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-utilities\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.100854 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c4hdw\" (UniqueName: \"kubernetes.io/projected/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-kube-api-access-c4hdw\") pod \"community-operators-q8p88\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.186192 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:56:58 crc kubenswrapper[5028]: I1123 07:56:58.656493 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q8p88"] Nov 23 07:56:59 crc kubenswrapper[5028]: I1123 07:56:59.384733 5028 generic.go:334] "Generic (PLEG): container finished" podID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerID="b025f537517d54b13ea5c0040e75f923df405550f85c4fb6027112be56bec516" exitCode=0 Nov 23 07:56:59 crc kubenswrapper[5028]: I1123 07:56:59.385093 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8p88" event={"ID":"68c1c807-e6f5-4a09-b6e6-1aca4a19d880","Type":"ContainerDied","Data":"b025f537517d54b13ea5c0040e75f923df405550f85c4fb6027112be56bec516"} Nov 23 07:56:59 crc kubenswrapper[5028]: I1123 07:56:59.385262 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8p88" event={"ID":"68c1c807-e6f5-4a09-b6e6-1aca4a19d880","Type":"ContainerStarted","Data":"c7d98ebf95e1eea32d2a828f43f834eb788c05de2745bf4f5c8b068226c68c50"} Nov 23 07:56:59 crc kubenswrapper[5028]: I1123 07:56:59.387579 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 07:57:00 crc kubenswrapper[5028]: I1123 07:57:00.397621 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8p88" event={"ID":"68c1c807-e6f5-4a09-b6e6-1aca4a19d880","Type":"ContainerStarted","Data":"824d94859182652f399d5b87ca633bde242941df9b7768454ae049a9a6097bb5"} Nov 23 07:57:01 crc kubenswrapper[5028]: I1123 07:57:01.405638 5028 generic.go:334] "Generic (PLEG): container finished" podID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerID="824d94859182652f399d5b87ca633bde242941df9b7768454ae049a9a6097bb5" exitCode=0 Nov 23 07:57:01 crc kubenswrapper[5028]: I1123 07:57:01.406012 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8p88" event={"ID":"68c1c807-e6f5-4a09-b6e6-1aca4a19d880","Type":"ContainerDied","Data":"824d94859182652f399d5b87ca633bde242941df9b7768454ae049a9a6097bb5"} Nov 23 07:57:02 crc kubenswrapper[5028]: I1123 07:57:02.417832 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8p88" event={"ID":"68c1c807-e6f5-4a09-b6e6-1aca4a19d880","Type":"ContainerStarted","Data":"8a0cd585145ffb9210444eeca5696c96654d87d3db488702a51b7852c8979eec"} Nov 23 07:57:02 crc kubenswrapper[5028]: I1123 07:57:02.438657 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q8p88" podStartSLOduration=2.971481131 podStartE2EDuration="5.438636603s" podCreationTimestamp="2025-11-23 07:56:57 +0000 UTC" firstStartedPulling="2025-11-23 07:56:59.387336077 +0000 UTC m=+4003.084740856" lastFinishedPulling="2025-11-23 07:57:01.854491539 +0000 UTC m=+4005.551896328" observedRunningTime="2025-11-23 07:57:02.438623543 +0000 UTC m=+4006.136028332" watchObservedRunningTime="2025-11-23 
Nov 23 07:57:07 crc kubenswrapper[5028]: I1123 07:57:07.058611 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"
Nov 23 07:57:07 crc kubenswrapper[5028]: E1123 07:57:07.060070 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 07:57:08 crc kubenswrapper[5028]: I1123 07:57:08.186519 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q8p88"
Nov 23 07:57:08 crc kubenswrapper[5028]: I1123 07:57:08.186590 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q8p88"
Nov 23 07:57:08 crc kubenswrapper[5028]: I1123 07:57:08.234168 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q8p88"
Nov 23 07:57:08 crc kubenswrapper[5028]: I1123 07:57:08.533867 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q8p88"
Nov 23 07:57:09 crc kubenswrapper[5028]: I1123 07:57:09.885612 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7z68x"]
Nov 23 07:57:09 crc kubenswrapper[5028]: I1123 07:57:09.887784 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7z68x"
Nov 23 07:57:09 crc kubenswrapper[5028]: I1123 07:57:09.905798 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z68x"]
Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.075686 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-catalog-content\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x"
Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.076393 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76tdf\" (UniqueName: \"kubernetes.io/projected/4aff68f2-958a-4cc1-852d-073087f5e7f2-kube-api-access-76tdf\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x"
Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.076661 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-utilities\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x"
Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.178431 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-catalog-content\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x"
\"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.178763 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76tdf\" (UniqueName: \"kubernetes.io/projected/4aff68f2-958a-4cc1-852d-073087f5e7f2-kube-api-access-76tdf\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.178880 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-utilities\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.179059 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-catalog-content\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.179381 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-utilities\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.196695 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76tdf\" (UniqueName: \"kubernetes.io/projected/4aff68f2-958a-4cc1-852d-073087f5e7f2-kube-api-access-76tdf\") pod \"redhat-operators-7z68x\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.213219 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.463277 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7z68x"] Nov 23 07:57:10 crc kubenswrapper[5028]: I1123 07:57:10.494174 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z68x" event={"ID":"4aff68f2-958a-4cc1-852d-073087f5e7f2","Type":"ContainerStarted","Data":"19a042390bfa0394dd266c1ed52ce80105d71606f9c78de2438684b374b2502f"} Nov 23 07:57:11 crc kubenswrapper[5028]: I1123 07:57:11.501743 5028 generic.go:334] "Generic (PLEG): container finished" podID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerID="b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e" exitCode=0 Nov 23 07:57:11 crc kubenswrapper[5028]: I1123 07:57:11.501823 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z68x" event={"ID":"4aff68f2-958a-4cc1-852d-073087f5e7f2","Type":"ContainerDied","Data":"b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e"} Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.274701 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8p88"] Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.275364 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q8p88" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerName="registry-server" containerID="cri-o://8a0cd585145ffb9210444eeca5696c96654d87d3db488702a51b7852c8979eec" gracePeriod=2 Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.512480 5028 generic.go:334] "Generic (PLEG): container finished" podID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerID="8a0cd585145ffb9210444eeca5696c96654d87d3db488702a51b7852c8979eec" exitCode=0 Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.512539 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8p88" event={"ID":"68c1c807-e6f5-4a09-b6e6-1aca4a19d880","Type":"ContainerDied","Data":"8a0cd585145ffb9210444eeca5696c96654d87d3db488702a51b7852c8979eec"} Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.514471 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z68x" event={"ID":"4aff68f2-958a-4cc1-852d-073087f5e7f2","Type":"ContainerStarted","Data":"b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4"} Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.689464 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.743794 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-utilities\") pod \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.743831 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4hdw\" (UniqueName: \"kubernetes.io/projected/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-kube-api-access-c4hdw\") pod \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.743853 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-catalog-content\") pod \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\" (UID: \"68c1c807-e6f5-4a09-b6e6-1aca4a19d880\") " Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.744751 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-utilities" (OuterVolumeSpecName: "utilities") pod "68c1c807-e6f5-4a09-b6e6-1aca4a19d880" (UID: "68c1c807-e6f5-4a09-b6e6-1aca4a19d880"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.755327 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-kube-api-access-c4hdw" (OuterVolumeSpecName: "kube-api-access-c4hdw") pod "68c1c807-e6f5-4a09-b6e6-1aca4a19d880" (UID: "68c1c807-e6f5-4a09-b6e6-1aca4a19d880"). InnerVolumeSpecName "kube-api-access-c4hdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.801281 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68c1c807-e6f5-4a09-b6e6-1aca4a19d880" (UID: "68c1c807-e6f5-4a09-b6e6-1aca4a19d880"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.845717 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.845761 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4hdw\" (UniqueName: \"kubernetes.io/projected/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-kube-api-access-c4hdw\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:12 crc kubenswrapper[5028]: I1123 07:57:12.845771 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68c1c807-e6f5-4a09-b6e6-1aca4a19d880-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.522777 5028 generic.go:334] "Generic (PLEG): container finished" podID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerID="b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4" exitCode=0 Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.522869 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z68x" event={"ID":"4aff68f2-958a-4cc1-852d-073087f5e7f2","Type":"ContainerDied","Data":"b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4"} Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.526483 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q8p88" Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.526451 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q8p88" event={"ID":"68c1c807-e6f5-4a09-b6e6-1aca4a19d880","Type":"ContainerDied","Data":"c7d98ebf95e1eea32d2a828f43f834eb788c05de2745bf4f5c8b068226c68c50"} Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.526929 5028 scope.go:117] "RemoveContainer" containerID="8a0cd585145ffb9210444eeca5696c96654d87d3db488702a51b7852c8979eec" Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.563420 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q8p88"] Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.564834 5028 scope.go:117] "RemoveContainer" containerID="824d94859182652f399d5b87ca633bde242941df9b7768454ae049a9a6097bb5" Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.569298 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q8p88"] Nov 23 07:57:13 crc kubenswrapper[5028]: I1123 07:57:13.595514 5028 scope.go:117] "RemoveContainer" containerID="b025f537517d54b13ea5c0040e75f923df405550f85c4fb6027112be56bec516" Nov 23 07:57:14 crc kubenswrapper[5028]: I1123 07:57:14.554570 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z68x" event={"ID":"4aff68f2-958a-4cc1-852d-073087f5e7f2","Type":"ContainerStarted","Data":"a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929"} Nov 23 07:57:14 crc kubenswrapper[5028]: I1123 07:57:14.578381 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7z68x" podStartSLOduration=3.110748657 podStartE2EDuration="5.578365069s" podCreationTimestamp="2025-11-23 07:57:09 +0000 UTC" firstStartedPulling="2025-11-23 07:57:11.502981925 +0000 UTC m=+4015.200386714" 
lastFinishedPulling="2025-11-23 07:57:13.970598337 +0000 UTC m=+4017.668003126" observedRunningTime="2025-11-23 07:57:14.574873324 +0000 UTC m=+4018.272278103" watchObservedRunningTime="2025-11-23 07:57:14.578365069 +0000 UTC m=+4018.275769848" Nov 23 07:57:15 crc kubenswrapper[5028]: I1123 07:57:15.063942 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" path="/var/lib/kubelet/pods/68c1c807-e6f5-4a09-b6e6-1aca4a19d880/volumes" Nov 23 07:57:19 crc kubenswrapper[5028]: I1123 07:57:19.052618 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:57:19 crc kubenswrapper[5028]: E1123 07:57:19.053045 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:57:20 crc kubenswrapper[5028]: I1123 07:57:20.214150 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:20 crc kubenswrapper[5028]: I1123 07:57:20.214500 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:20 crc kubenswrapper[5028]: I1123 07:57:20.271376 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:20 crc kubenswrapper[5028]: I1123 07:57:20.653398 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:20 crc kubenswrapper[5028]: I1123 07:57:20.697936 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z68x"] Nov 23 07:57:22 crc kubenswrapper[5028]: I1123 07:57:22.610244 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7z68x" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="registry-server" containerID="cri-o://a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929" gracePeriod=2 Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.393662 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.519535 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-utilities\") pod \"4aff68f2-958a-4cc1-852d-073087f5e7f2\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.519612 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76tdf\" (UniqueName: \"kubernetes.io/projected/4aff68f2-958a-4cc1-852d-073087f5e7f2-kube-api-access-76tdf\") pod \"4aff68f2-958a-4cc1-852d-073087f5e7f2\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.519969 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-catalog-content\") pod \"4aff68f2-958a-4cc1-852d-073087f5e7f2\" (UID: \"4aff68f2-958a-4cc1-852d-073087f5e7f2\") " Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.520741 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-utilities" (OuterVolumeSpecName: "utilities") pod "4aff68f2-958a-4cc1-852d-073087f5e7f2" (UID: "4aff68f2-958a-4cc1-852d-073087f5e7f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.525691 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aff68f2-958a-4cc1-852d-073087f5e7f2-kube-api-access-76tdf" (OuterVolumeSpecName: "kube-api-access-76tdf") pod "4aff68f2-958a-4cc1-852d-073087f5e7f2" (UID: "4aff68f2-958a-4cc1-852d-073087f5e7f2"). InnerVolumeSpecName "kube-api-access-76tdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.621119 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.621158 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76tdf\" (UniqueName: \"kubernetes.io/projected/4aff68f2-958a-4cc1-852d-073087f5e7f2-kube-api-access-76tdf\") on node \"crc\" DevicePath \"\"" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.627074 5028 generic.go:334] "Generic (PLEG): container finished" podID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerID="a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929" exitCode=0 Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.627112 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z68x" event={"ID":"4aff68f2-958a-4cc1-852d-073087f5e7f2","Type":"ContainerDied","Data":"a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929"} Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.627134 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7z68x" event={"ID":"4aff68f2-958a-4cc1-852d-073087f5e7f2","Type":"ContainerDied","Data":"19a042390bfa0394dd266c1ed52ce80105d71606f9c78de2438684b374b2502f"} Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.627151 5028 scope.go:117] "RemoveContainer" containerID="a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.628096 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7z68x" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.632230 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aff68f2-958a-4cc1-852d-073087f5e7f2" (UID: "4aff68f2-958a-4cc1-852d-073087f5e7f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.662418 5028 scope.go:117] "RemoveContainer" containerID="b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.684871 5028 scope.go:117] "RemoveContainer" containerID="b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.707111 5028 scope.go:117] "RemoveContainer" containerID="a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929" Nov 23 07:57:24 crc kubenswrapper[5028]: E1123 07:57:24.707504 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929\": container with ID starting with a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929 not found: ID does not exist" containerID="a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.707555 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929"} err="failed to get container status \"a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929\": rpc error: code = NotFound desc = could not find container \"a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929\": container with ID starting with a03ea527e7f0b11cfed92247571e14538dace6b605c4a89247c5cf349af34929 not found: ID does not exist" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.707584 5028 scope.go:117] "RemoveContainer" containerID="b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4" Nov 23 07:57:24 crc kubenswrapper[5028]: E1123 07:57:24.707934 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4\": container with ID starting with b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4 not found: ID does not exist" containerID="b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.708038 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4"} err="failed to get container status \"b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4\": rpc error: code = NotFound desc = could not find container \"b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4\": container with ID starting with b36f54edb2054e6d134ee3aefb49ce6eec51c8d48ed6d72ba5ecb13b83fbaeb4 not found: ID does not exist" Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.708069 5028 scope.go:117] "RemoveContainer" containerID="b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e" Nov 23 07:57:24 crc kubenswrapper[5028]: E1123 07:57:24.708385 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e\": container with ID starting with b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e not found: ID does not exist" containerID="b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e" Nov 23 07:57:24 crc 
Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.708422 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e"} err="failed to get container status \"b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e\": rpc error: code = NotFound desc = could not find container \"b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e\": container with ID starting with b41af20232066f08a8f20f23288471386e4d8db407f74c631a2b7072f0079e2e not found: ID does not exist"
Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.722365 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aff68f2-958a-4cc1-852d-073087f5e7f2-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.958727 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7z68x"]
Nov 23 07:57:24 crc kubenswrapper[5028]: I1123 07:57:24.965976 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7z68x"]
Nov 23 07:57:25 crc kubenswrapper[5028]: I1123 07:57:25.060568 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" path="/var/lib/kubelet/pods/4aff68f2-958a-4cc1-852d-073087f5e7f2/volumes"
Nov 23 07:57:30 crc kubenswrapper[5028]: I1123 07:57:30.053818 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"
Nov 23 07:57:30 crc kubenswrapper[5028]: E1123 07:57:30.054694 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 07:57:41 crc kubenswrapper[5028]: I1123 07:57:41.052773 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"
Nov 23 07:57:41 crc kubenswrapper[5028]: E1123 07:57:41.053703 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 07:57:52 crc kubenswrapper[5028]: I1123 07:57:52.053988 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"
Nov 23 07:57:52 crc kubenswrapper[5028]: E1123 07:57:52.054990 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 07:58:05 crc kubenswrapper[5028]: I1123 07:58:05.053539 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc"
containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:58:05 crc kubenswrapper[5028]: E1123 07:58:05.054350 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:58:20 crc kubenswrapper[5028]: I1123 07:58:20.052544 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:58:20 crc kubenswrapper[5028]: E1123 07:58:20.053418 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:58:32 crc kubenswrapper[5028]: I1123 07:58:32.053534 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:58:32 crc kubenswrapper[5028]: E1123 07:58:32.054236 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:58:45 crc kubenswrapper[5028]: I1123 07:58:45.053165 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:58:45 crc kubenswrapper[5028]: E1123 07:58:45.054089 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:58:58 crc kubenswrapper[5028]: I1123 07:58:58.053195 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:58:58 crc kubenswrapper[5028]: E1123 07:58:58.053725 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:59:10 crc kubenswrapper[5028]: I1123 07:59:10.054215 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:59:10 crc kubenswrapper[5028]: E1123 07:59:10.055606 5028 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:59:25 crc kubenswrapper[5028]: I1123 07:59:25.053237 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:59:25 crc kubenswrapper[5028]: E1123 07:59:25.053927 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 07:59:36 crc kubenswrapper[5028]: I1123 07:59:36.052720 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 07:59:36 crc kubenswrapper[5028]: I1123 07:59:36.806047 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"c852ecc9c696605a1433a305a9de01b89ea896af873c5abbc93689d6fe678571"} Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.139538 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf"] Nov 23 08:00:00 crc kubenswrapper[5028]: E1123 08:00:00.140412 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="extract-content" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140426 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="extract-content" Nov 23 08:00:00 crc kubenswrapper[5028]: E1123 08:00:00.140439 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerName="extract-content" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140446 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerName="extract-content" Nov 23 08:00:00 crc kubenswrapper[5028]: E1123 08:00:00.140460 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerName="registry-server" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140467 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerName="registry-server" Nov 23 08:00:00 crc kubenswrapper[5028]: E1123 08:00:00.140481 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="registry-server" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140488 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="registry-server" Nov 23 08:00:00 crc kubenswrapper[5028]: E1123 08:00:00.140500 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" 
containerName="extract-utilities" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140507 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerName="extract-utilities" Nov 23 08:00:00 crc kubenswrapper[5028]: E1123 08:00:00.140534 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="extract-utilities" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140540 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="extract-utilities" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140700 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aff68f2-958a-4cc1-852d-073087f5e7f2" containerName="registry-server" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.140712 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c1c807-e6f5-4a09-b6e6-1aca4a19d880" containerName="registry-server" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.141235 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.143395 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.145905 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.150720 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf"] Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.172782 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-secret-volume\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.173151 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrzwq\" (UniqueName: \"kubernetes.io/projected/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-kube-api-access-rrzwq\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.173199 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-config-volume\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.275209 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-config-volume\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.275333 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-secret-volume\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.275414 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrzwq\" (UniqueName: \"kubernetes.io/projected/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-kube-api-access-rrzwq\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.276379 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-config-volume\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.282555 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-secret-volume\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.291430 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrzwq\" (UniqueName: \"kubernetes.io/projected/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-kube-api-access-rrzwq\") pod \"collect-profiles-29398080-xfncf\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.462264 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.879308 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf"] Nov 23 08:00:00 crc kubenswrapper[5028]: I1123 08:00:00.980907 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" event={"ID":"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef","Type":"ContainerStarted","Data":"a3fe48ac946c51cd4905bde88ae7776c17f4db39a20d8ba80d7b182d6b04003d"} Nov 23 08:00:01 crc kubenswrapper[5028]: I1123 08:00:01.988366 5028 generic.go:334] "Generic (PLEG): container finished" podID="7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" containerID="8b9895524d07ca5d47c52dcb30b556d2d00ab23367b4d2e3a3ce76c7eaf8cff0" exitCode=0 Nov 23 08:00:01 crc kubenswrapper[5028]: I1123 08:00:01.988408 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" event={"ID":"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef","Type":"ContainerDied","Data":"8b9895524d07ca5d47c52dcb30b556d2d00ab23367b4d2e3a3ce76c7eaf8cff0"} Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.234814 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.411785 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-config-volume\") pod \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.412162 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrzwq\" (UniqueName: \"kubernetes.io/projected/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-kube-api-access-rrzwq\") pod \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.412262 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-secret-volume\") pod \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\" (UID: \"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef\") " Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.412808 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" (UID: "7fe0a787-6771-41dd-a8a4-32a53fe4c5ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.417326 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" (UID: "7fe0a787-6771-41dd-a8a4-32a53fe4c5ef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.419151 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-kube-api-access-rrzwq" (OuterVolumeSpecName: "kube-api-access-rrzwq") pod "7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" (UID: "7fe0a787-6771-41dd-a8a4-32a53fe4c5ef"). InnerVolumeSpecName "kube-api-access-rrzwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.514171 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.514210 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrzwq\" (UniqueName: \"kubernetes.io/projected/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-kube-api-access-rrzwq\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:03 crc kubenswrapper[5028]: I1123 08:00:03.514225 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:04 crc kubenswrapper[5028]: I1123 08:00:04.002109 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" event={"ID":"7fe0a787-6771-41dd-a8a4-32a53fe4c5ef","Type":"ContainerDied","Data":"a3fe48ac946c51cd4905bde88ae7776c17f4db39a20d8ba80d7b182d6b04003d"} Nov 23 08:00:04 crc kubenswrapper[5028]: I1123 08:00:04.002414 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3fe48ac946c51cd4905bde88ae7776c17f4db39a20d8ba80d7b182d6b04003d" Nov 23 08:00:04 crc kubenswrapper[5028]: I1123 08:00:04.002173 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf" Nov 23 08:00:04 crc kubenswrapper[5028]: I1123 08:00:04.309691 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t"] Nov 23 08:00:04 crc kubenswrapper[5028]: I1123 08:00:04.315304 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398035-rck8t"] Nov 23 08:00:05 crc kubenswrapper[5028]: I1123 08:00:05.068268 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2efdbebd-cdbd-429a-bb93-c1983c87b38c" path="/var/lib/kubelet/pods/2efdbebd-cdbd-429a-bb93-c1983c87b38c/volumes" Nov 23 08:00:23 crc kubenswrapper[5028]: I1123 08:00:23.816663 5028 scope.go:117] "RemoveContainer" containerID="6af93a82f17c6f49cf96b012bceaabf7f525eaf7fb276c031c37810456fa3310" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.708136 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gfpsn"] Nov 23 08:00:33 crc kubenswrapper[5028]: E1123 08:00:33.709151 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" containerName="collect-profiles" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.709174 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" containerName="collect-profiles" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.709411 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" containerName="collect-profiles" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.715373 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.743464 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfpsn"] Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.845381 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-catalog-content\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.845714 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-utilities\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.845806 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2h9q\" (UniqueName: \"kubernetes.io/projected/d277572c-7c9c-4d45-bd9d-861d6a2001bf-kube-api-access-q2h9q\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.947633 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-utilities\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.947964 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2h9q\" (UniqueName: \"kubernetes.io/projected/d277572c-7c9c-4d45-bd9d-861d6a2001bf-kube-api-access-q2h9q\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.948136 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-catalog-content\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.948184 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-utilities\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.948439 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-catalog-content\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:33 crc kubenswrapper[5028]: I1123 08:00:33.971551 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q2h9q\" (UniqueName: \"kubernetes.io/projected/d277572c-7c9c-4d45-bd9d-861d6a2001bf-kube-api-access-q2h9q\") pod \"redhat-marketplace-gfpsn\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:34 crc kubenswrapper[5028]: I1123 08:00:34.040689 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:35 crc kubenswrapper[5028]: I1123 08:00:35.035391 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfpsn"] Nov 23 08:00:35 crc kubenswrapper[5028]: W1123 08:00:35.043654 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd277572c_7c9c_4d45_bd9d_861d6a2001bf.slice/crio-5e92c1e5beb66153318ed615205df7ad2c72e503011fdd726f2574137a44a52c WatchSource:0}: Error finding container 5e92c1e5beb66153318ed615205df7ad2c72e503011fdd726f2574137a44a52c: Status 404 returned error can't find the container with id 5e92c1e5beb66153318ed615205df7ad2c72e503011fdd726f2574137a44a52c Nov 23 08:00:35 crc kubenswrapper[5028]: I1123 08:00:35.270718 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfpsn" event={"ID":"d277572c-7c9c-4d45-bd9d-861d6a2001bf","Type":"ContainerStarted","Data":"659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3"} Nov 23 08:00:35 crc kubenswrapper[5028]: I1123 08:00:35.271230 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfpsn" event={"ID":"d277572c-7c9c-4d45-bd9d-861d6a2001bf","Type":"ContainerStarted","Data":"5e92c1e5beb66153318ed615205df7ad2c72e503011fdd726f2574137a44a52c"} Nov 23 08:00:36 crc kubenswrapper[5028]: I1123 08:00:36.284039 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfpsn" event={"ID":"d277572c-7c9c-4d45-bd9d-861d6a2001bf","Type":"ContainerDied","Data":"659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3"} Nov 23 08:00:36 crc kubenswrapper[5028]: I1123 08:00:36.284059 5028 generic.go:334] "Generic (PLEG): container finished" podID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerID="659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3" exitCode=0 Nov 23 08:00:38 crc kubenswrapper[5028]: I1123 08:00:38.302505 5028 generic.go:334] "Generic (PLEG): container finished" podID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerID="4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f" exitCode=0 Nov 23 08:00:38 crc kubenswrapper[5028]: I1123 08:00:38.302730 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfpsn" event={"ID":"d277572c-7c9c-4d45-bd9d-861d6a2001bf","Type":"ContainerDied","Data":"4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f"} Nov 23 08:00:39 crc kubenswrapper[5028]: I1123 08:00:39.312576 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfpsn" event={"ID":"d277572c-7c9c-4d45-bd9d-861d6a2001bf","Type":"ContainerStarted","Data":"4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50"} Nov 23 08:00:39 crc kubenswrapper[5028]: I1123 08:00:39.335091 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gfpsn" podStartSLOduration=3.914380983 
podStartE2EDuration="6.335073937s" podCreationTimestamp="2025-11-23 08:00:33 +0000 UTC" firstStartedPulling="2025-11-23 08:00:36.290728123 +0000 UTC m=+4219.988132912" lastFinishedPulling="2025-11-23 08:00:38.711421057 +0000 UTC m=+4222.408825866" observedRunningTime="2025-11-23 08:00:39.332117205 +0000 UTC m=+4223.029521984" watchObservedRunningTime="2025-11-23 08:00:39.335073937 +0000 UTC m=+4223.032478716" Nov 23 08:00:44 crc kubenswrapper[5028]: I1123 08:00:44.041938 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:44 crc kubenswrapper[5028]: I1123 08:00:44.042490 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:44 crc kubenswrapper[5028]: I1123 08:00:44.086055 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:44 crc kubenswrapper[5028]: I1123 08:00:44.398119 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:44 crc kubenswrapper[5028]: I1123 08:00:44.449436 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfpsn"] Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.381087 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gfpsn" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="registry-server" containerID="cri-o://4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50" gracePeriod=2 Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.753717 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.761633 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2h9q\" (UniqueName: \"kubernetes.io/projected/d277572c-7c9c-4d45-bd9d-861d6a2001bf-kube-api-access-q2h9q\") pod \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.761757 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-catalog-content\") pod \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.761819 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-utilities\") pod \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\" (UID: \"d277572c-7c9c-4d45-bd9d-861d6a2001bf\") " Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.763177 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-utilities" (OuterVolumeSpecName: "utilities") pod "d277572c-7c9c-4d45-bd9d-861d6a2001bf" (UID: "d277572c-7c9c-4d45-bd9d-861d6a2001bf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.769478 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d277572c-7c9c-4d45-bd9d-861d6a2001bf-kube-api-access-q2h9q" (OuterVolumeSpecName: "kube-api-access-q2h9q") pod "d277572c-7c9c-4d45-bd9d-861d6a2001bf" (UID: "d277572c-7c9c-4d45-bd9d-861d6a2001bf"). InnerVolumeSpecName "kube-api-access-q2h9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.787394 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d277572c-7c9c-4d45-bd9d-861d6a2001bf" (UID: "d277572c-7c9c-4d45-bd9d-861d6a2001bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.863336 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2h9q\" (UniqueName: \"kubernetes.io/projected/d277572c-7c9c-4d45-bd9d-861d6a2001bf-kube-api-access-q2h9q\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.863372 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:46 crc kubenswrapper[5028]: I1123 08:00:46.863384 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d277572c-7c9c-4d45-bd9d-861d6a2001bf-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.389470 5028 generic.go:334] "Generic (PLEG): container finished" podID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerID="4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50" exitCode=0 Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.389535 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfpsn" event={"ID":"d277572c-7c9c-4d45-bd9d-861d6a2001bf","Type":"ContainerDied","Data":"4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50"} Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.389901 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gfpsn" event={"ID":"d277572c-7c9c-4d45-bd9d-861d6a2001bf","Type":"ContainerDied","Data":"5e92c1e5beb66153318ed615205df7ad2c72e503011fdd726f2574137a44a52c"} Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.389922 5028 scope.go:117] "RemoveContainer" containerID="4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.389545 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gfpsn" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.418615 5028 scope.go:117] "RemoveContainer" containerID="4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.419919 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfpsn"] Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.434497 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gfpsn"] Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.439224 5028 scope.go:117] "RemoveContainer" containerID="659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.463041 5028 scope.go:117] "RemoveContainer" containerID="4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50" Nov 23 08:00:47 crc kubenswrapper[5028]: E1123 08:00:47.463515 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50\": container with ID starting with 4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50 not found: ID does not exist" containerID="4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.463563 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50"} err="failed to get container status \"4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50\": rpc error: code = NotFound desc = could not find container \"4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50\": container with ID starting with 4d92049405db93d19dceac4dc66b7d69f94a838db82a423aee47a2f5d29a0f50 not found: ID does not exist" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.463598 5028 scope.go:117] "RemoveContainer" containerID="4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f" Nov 23 08:00:47 crc kubenswrapper[5028]: E1123 08:00:47.463844 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f\": container with ID starting with 4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f not found: ID does not exist" containerID="4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.463884 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f"} err="failed to get container status \"4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f\": rpc error: code = NotFound desc = could not find container \"4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f\": container with ID starting with 4d5f3b7c9508368d331f0ab0067f4987bb152a460174bdb7aa92326da7898d7f not found: ID does not exist" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.463928 5028 scope.go:117] "RemoveContainer" containerID="659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3" Nov 23 08:00:47 crc kubenswrapper[5028]: E1123 08:00:47.464512 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3\": container with ID starting with 659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3 not found: ID does not exist" containerID="659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3" Nov 23 08:00:47 crc kubenswrapper[5028]: I1123 08:00:47.464545 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3"} err="failed to get container status \"659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3\": rpc error: code = NotFound desc = could not find container \"659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3\": container with ID starting with 659d274531b051eabe9aa31ad2541eee311c3816dcaa3c6585c86a57fead2fd3 not found: ID does not exist" Nov 23 08:00:49 crc kubenswrapper[5028]: I1123 08:00:49.065644 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" path="/var/lib/kubelet/pods/d277572c-7c9c-4d45-bd9d-861d6a2001bf/volumes" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.127837 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xd6tc"] Nov 23 08:01:59 crc kubenswrapper[5028]: E1123 08:01:59.129290 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="registry-server" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.129323 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="registry-server" Nov 23 08:01:59 crc kubenswrapper[5028]: E1123 08:01:59.129372 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="extract-utilities" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.129391 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="extract-utilities" Nov 23 08:01:59 crc kubenswrapper[5028]: E1123 08:01:59.129436 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="extract-content" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.129448 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="extract-content" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.129802 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d277572c-7c9c-4d45-bd9d-861d6a2001bf" containerName="registry-server" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.132036 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.142036 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd6tc"] Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.224907 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvvx\" (UniqueName: \"kubernetes.io/projected/0f1a7bed-22e3-4105-9a75-1323fed197ba-kube-api-access-xwvvx\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.225017 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-catalog-content\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.225550 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-utilities\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.326804 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-catalog-content\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.326908 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-utilities\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.326990 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvvx\" (UniqueName: \"kubernetes.io/projected/0f1a7bed-22e3-4105-9a75-1323fed197ba-kube-api-access-xwvvx\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.327395 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-catalog-content\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.327527 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-utilities\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.351557 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xwvvx\" (UniqueName: \"kubernetes.io/projected/0f1a7bed-22e3-4105-9a75-1323fed197ba-kube-api-access-xwvvx\") pod \"certified-operators-xd6tc\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.460704 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:01:59 crc kubenswrapper[5028]: I1123 08:01:59.916197 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xd6tc"] Nov 23 08:02:00 crc kubenswrapper[5028]: I1123 08:02:00.093942 5028 generic.go:334] "Generic (PLEG): container finished" podID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerID="fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171" exitCode=0 Nov 23 08:02:00 crc kubenswrapper[5028]: I1123 08:02:00.093999 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd6tc" event={"ID":"0f1a7bed-22e3-4105-9a75-1323fed197ba","Type":"ContainerDied","Data":"fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171"} Nov 23 08:02:00 crc kubenswrapper[5028]: I1123 08:02:00.094024 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd6tc" event={"ID":"0f1a7bed-22e3-4105-9a75-1323fed197ba","Type":"ContainerStarted","Data":"50b9204d428b1756ffc38721c148f047636701467c26f9f4dd1d4b4f165e5d93"} Nov 23 08:02:00 crc kubenswrapper[5028]: I1123 08:02:00.095655 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:02:00 crc kubenswrapper[5028]: I1123 08:02:00.946111 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:02:00 crc kubenswrapper[5028]: I1123 08:02:00.946447 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:02:01 crc kubenswrapper[5028]: I1123 08:02:01.102257 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd6tc" event={"ID":"0f1a7bed-22e3-4105-9a75-1323fed197ba","Type":"ContainerStarted","Data":"b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043"} Nov 23 08:02:02 crc kubenswrapper[5028]: I1123 08:02:02.112428 5028 generic.go:334] "Generic (PLEG): container finished" podID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerID="b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043" exitCode=0 Nov 23 08:02:02 crc kubenswrapper[5028]: I1123 08:02:02.112587 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd6tc" event={"ID":"0f1a7bed-22e3-4105-9a75-1323fed197ba","Type":"ContainerDied","Data":"b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043"} Nov 23 08:02:03 crc kubenswrapper[5028]: I1123 08:02:03.123162 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-xd6tc" event={"ID":"0f1a7bed-22e3-4105-9a75-1323fed197ba","Type":"ContainerStarted","Data":"a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e"} Nov 23 08:02:03 crc kubenswrapper[5028]: I1123 08:02:03.154650 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xd6tc" podStartSLOduration=1.487808646 podStartE2EDuration="4.154630361s" podCreationTimestamp="2025-11-23 08:01:59 +0000 UTC" firstStartedPulling="2025-11-23 08:02:00.095439536 +0000 UTC m=+4303.792844315" lastFinishedPulling="2025-11-23 08:02:02.762261211 +0000 UTC m=+4306.459666030" observedRunningTime="2025-11-23 08:02:03.144540664 +0000 UTC m=+4306.841945453" watchObservedRunningTime="2025-11-23 08:02:03.154630361 +0000 UTC m=+4306.852035150" Nov 23 08:02:09 crc kubenswrapper[5028]: I1123 08:02:09.461172 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:02:09 crc kubenswrapper[5028]: I1123 08:02:09.461759 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:02:09 crc kubenswrapper[5028]: I1123 08:02:09.503182 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:02:10 crc kubenswrapper[5028]: I1123 08:02:10.230779 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:02:10 crc kubenswrapper[5028]: I1123 08:02:10.278751 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd6tc"] Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.203065 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xd6tc" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="registry-server" containerID="cri-o://a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e" gracePeriod=2 Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.666453 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.740477 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwvvx\" (UniqueName: \"kubernetes.io/projected/0f1a7bed-22e3-4105-9a75-1323fed197ba-kube-api-access-xwvvx\") pod \"0f1a7bed-22e3-4105-9a75-1323fed197ba\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.740613 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-utilities\") pod \"0f1a7bed-22e3-4105-9a75-1323fed197ba\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.740695 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-catalog-content\") pod \"0f1a7bed-22e3-4105-9a75-1323fed197ba\" (UID: \"0f1a7bed-22e3-4105-9a75-1323fed197ba\") " Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.742086 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-utilities" (OuterVolumeSpecName: "utilities") pod "0f1a7bed-22e3-4105-9a75-1323fed197ba" (UID: "0f1a7bed-22e3-4105-9a75-1323fed197ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.748259 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1a7bed-22e3-4105-9a75-1323fed197ba-kube-api-access-xwvvx" (OuterVolumeSpecName: "kube-api-access-xwvvx") pod "0f1a7bed-22e3-4105-9a75-1323fed197ba" (UID: "0f1a7bed-22e3-4105-9a75-1323fed197ba"). InnerVolumeSpecName "kube-api-access-xwvvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.797564 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f1a7bed-22e3-4105-9a75-1323fed197ba" (UID: "0f1a7bed-22e3-4105-9a75-1323fed197ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.842689 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.842727 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwvvx\" (UniqueName: \"kubernetes.io/projected/0f1a7bed-22e3-4105-9a75-1323fed197ba-kube-api-access-xwvvx\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:12 crc kubenswrapper[5028]: I1123 08:02:12.842738 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f1a7bed-22e3-4105-9a75-1323fed197ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.218849 5028 generic.go:334] "Generic (PLEG): container finished" podID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerID="a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e" exitCode=0 Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.219024 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd6tc" event={"ID":"0f1a7bed-22e3-4105-9a75-1323fed197ba","Type":"ContainerDied","Data":"a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e"} Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.221458 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xd6tc" event={"ID":"0f1a7bed-22e3-4105-9a75-1323fed197ba","Type":"ContainerDied","Data":"50b9204d428b1756ffc38721c148f047636701467c26f9f4dd1d4b4f165e5d93"} Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.219073 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xd6tc" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.221515 5028 scope.go:117] "RemoveContainer" containerID="a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.252804 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xd6tc"] Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.253693 5028 scope.go:117] "RemoveContainer" containerID="b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.263938 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xd6tc"] Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.298985 5028 scope.go:117] "RemoveContainer" containerID="fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.315934 5028 scope.go:117] "RemoveContainer" containerID="a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e" Nov 23 08:02:13 crc kubenswrapper[5028]: E1123 08:02:13.316469 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e\": container with ID starting with a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e not found: ID does not exist" containerID="a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.316512 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e"} err="failed to get container status \"a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e\": rpc error: code = NotFound desc = could not find container \"a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e\": container with ID starting with a415475b6a233e219f082d4507bff53787f9dfa047a1b8752217363c2c939d9e not found: ID does not exist" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.316537 5028 scope.go:117] "RemoveContainer" containerID="b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043" Nov 23 08:02:13 crc kubenswrapper[5028]: E1123 08:02:13.316907 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043\": container with ID starting with b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043 not found: ID does not exist" containerID="b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.317000 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043"} err="failed to get container status \"b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043\": rpc error: code = NotFound desc = could not find container \"b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043\": container with ID starting with b744ec230366979964e2ba38350a23099fac07601baa06ae0af9d39d01788043 not found: ID does not exist" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.317070 5028 scope.go:117] "RemoveContainer" 
containerID="fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171" Nov 23 08:02:13 crc kubenswrapper[5028]: E1123 08:02:13.317536 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171\": container with ID starting with fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171 not found: ID does not exist" containerID="fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171" Nov 23 08:02:13 crc kubenswrapper[5028]: I1123 08:02:13.317578 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171"} err="failed to get container status \"fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171\": rpc error: code = NotFound desc = could not find container \"fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171\": container with ID starting with fb6634765f99230bd42d7091e87acc8949b8e430ac37d5591407b2cea988f171 not found: ID does not exist" Nov 23 08:02:15 crc kubenswrapper[5028]: I1123 08:02:15.061856 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" path="/var/lib/kubelet/pods/0f1a7bed-22e3-4105-9a75-1323fed197ba/volumes" Nov 23 08:02:30 crc kubenswrapper[5028]: I1123 08:02:30.946074 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:02:30 crc kubenswrapper[5028]: I1123 08:02:30.946674 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:03:00 crc kubenswrapper[5028]: I1123 08:03:00.946076 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:03:00 crc kubenswrapper[5028]: I1123 08:03:00.946634 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:03:00 crc kubenswrapper[5028]: I1123 08:03:00.946701 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 08:03:00 crc kubenswrapper[5028]: I1123 08:03:00.947349 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c852ecc9c696605a1433a305a9de01b89ea896af873c5abbc93689d6fe678571"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:03:00 crc 
kubenswrapper[5028]: I1123 08:03:00.947406 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://c852ecc9c696605a1433a305a9de01b89ea896af873c5abbc93689d6fe678571" gracePeriod=600 Nov 23 08:03:01 crc kubenswrapper[5028]: I1123 08:03:01.633602 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="c852ecc9c696605a1433a305a9de01b89ea896af873c5abbc93689d6fe678571" exitCode=0 Nov 23 08:03:01 crc kubenswrapper[5028]: I1123 08:03:01.633679 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"c852ecc9c696605a1433a305a9de01b89ea896af873c5abbc93689d6fe678571"} Nov 23 08:03:01 crc kubenswrapper[5028]: I1123 08:03:01.634134 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa"} Nov 23 08:03:01 crc kubenswrapper[5028]: I1123 08:03:01.634155 5028 scope.go:117] "RemoveContainer" containerID="2951af2a0c472a1a2f54fcf4a45bff231dbc4b7ffde320852c2418049ef788fc" Nov 23 08:05:30 crc kubenswrapper[5028]: I1123 08:05:30.946317 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:05:30 crc kubenswrapper[5028]: I1123 08:05:30.946911 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:06:00 crc kubenswrapper[5028]: I1123 08:06:00.946049 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:06:00 crc kubenswrapper[5028]: I1123 08:06:00.946487 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:06:30 crc kubenswrapper[5028]: I1123 08:06:30.946581 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:06:30 crc kubenswrapper[5028]: I1123 08:06:30.947261 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:06:30 crc kubenswrapper[5028]: I1123 08:06:30.947326 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 08:06:30 crc kubenswrapper[5028]: I1123 08:06:30.948253 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:06:30 crc kubenswrapper[5028]: I1123 08:06:30.948362 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" gracePeriod=600 Nov 23 08:06:31 crc kubenswrapper[5028]: E1123 08:06:31.071078 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:06:31 crc kubenswrapper[5028]: I1123 08:06:31.346644 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" exitCode=0 Nov 23 08:06:31 crc kubenswrapper[5028]: I1123 08:06:31.346697 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa"} Nov 23 08:06:31 crc kubenswrapper[5028]: I1123 08:06:31.346741 5028 scope.go:117] "RemoveContainer" containerID="c852ecc9c696605a1433a305a9de01b89ea896af873c5abbc93689d6fe678571" Nov 23 08:06:31 crc kubenswrapper[5028]: I1123 08:06:31.347232 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:06:31 crc kubenswrapper[5028]: E1123 08:06:31.347498 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:06:46 crc kubenswrapper[5028]: I1123 08:06:46.053624 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:06:46 crc kubenswrapper[5028]: E1123 08:06:46.055012 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:06:59 crc kubenswrapper[5028]: I1123 08:06:59.053130 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:06:59 crc kubenswrapper[5028]: E1123 08:06:59.054198 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:07:11 crc kubenswrapper[5028]: I1123 08:07:11.052928 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:07:11 crc kubenswrapper[5028]: E1123 08:07:11.053681 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:07:23 crc kubenswrapper[5028]: I1123 08:07:23.052904 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:07:23 crc kubenswrapper[5028]: E1123 08:07:23.053641 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:07:34 crc kubenswrapper[5028]: I1123 08:07:34.053216 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:07:34 crc kubenswrapper[5028]: E1123 08:07:34.054189 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:07:47 crc kubenswrapper[5028]: I1123 08:07:47.060983 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:07:47 crc kubenswrapper[5028]: E1123 08:07:47.062060 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:07:58 crc kubenswrapper[5028]: I1123 08:07:58.052930 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:07:58 crc kubenswrapper[5028]: E1123 08:07:58.053704 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:08:09 crc kubenswrapper[5028]: I1123 08:08:09.052923 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:08:09 crc kubenswrapper[5028]: E1123 08:08:09.053770 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:08:20 crc kubenswrapper[5028]: I1123 08:08:20.869635 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kwggw"] Nov 23 08:08:20 crc kubenswrapper[5028]: E1123 08:08:20.871513 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="extract-utilities" Nov 23 08:08:20 crc kubenswrapper[5028]: I1123 08:08:20.871538 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="extract-utilities" Nov 23 08:08:20 crc kubenswrapper[5028]: E1123 08:08:20.871555 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="extract-content" Nov 23 08:08:20 crc kubenswrapper[5028]: I1123 08:08:20.871563 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="extract-content" Nov 23 08:08:20 crc kubenswrapper[5028]: E1123 08:08:20.871608 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="registry-server" Nov 23 08:08:20 crc kubenswrapper[5028]: I1123 08:08:20.871617 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="registry-server" Nov 23 08:08:20 crc kubenswrapper[5028]: I1123 08:08:20.871792 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1a7bed-22e3-4105-9a75-1323fed197ba" containerName="registry-server" Nov 23 08:08:20 crc kubenswrapper[5028]: I1123 08:08:20.873460 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:20 crc kubenswrapper[5028]: I1123 08:08:20.885624 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwggw"] Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.031694 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-catalog-content\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.031737 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gsn\" (UniqueName: \"kubernetes.io/projected/45e4d58a-4bcc-4576-b637-9c42a86193fd-kube-api-access-25gsn\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.031781 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-utilities\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.132932 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-utilities\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.133172 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-catalog-content\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.133218 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25gsn\" (UniqueName: \"kubernetes.io/projected/45e4d58a-4bcc-4576-b637-9c42a86193fd-kube-api-access-25gsn\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.133470 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-utilities\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.133515 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-catalog-content\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.158269 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-25gsn\" (UniqueName: \"kubernetes.io/projected/45e4d58a-4bcc-4576-b637-9c42a86193fd-kube-api-access-25gsn\") pod \"community-operators-kwggw\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.236547 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:21 crc kubenswrapper[5028]: I1123 08:08:21.701382 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwggw"] Nov 23 08:08:22 crc kubenswrapper[5028]: I1123 08:08:22.269909 5028 generic.go:334] "Generic (PLEG): container finished" podID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerID="32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896" exitCode=0 Nov 23 08:08:22 crc kubenswrapper[5028]: I1123 08:08:22.269992 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwggw" event={"ID":"45e4d58a-4bcc-4576-b637-9c42a86193fd","Type":"ContainerDied","Data":"32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896"} Nov 23 08:08:22 crc kubenswrapper[5028]: I1123 08:08:22.270212 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwggw" event={"ID":"45e4d58a-4bcc-4576-b637-9c42a86193fd","Type":"ContainerStarted","Data":"a2ded29f8f59af9778111e7d68a3fb6447dd7873df869a230c6b123eafeeb940"} Nov 23 08:08:22 crc kubenswrapper[5028]: I1123 08:08:22.271902 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:08:24 crc kubenswrapper[5028]: I1123 08:08:24.052508 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:08:24 crc kubenswrapper[5028]: E1123 08:08:24.053024 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:08:24 crc kubenswrapper[5028]: I1123 08:08:24.289474 5028 generic.go:334] "Generic (PLEG): container finished" podID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerID="7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620" exitCode=0 Nov 23 08:08:24 crc kubenswrapper[5028]: I1123 08:08:24.289522 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwggw" event={"ID":"45e4d58a-4bcc-4576-b637-9c42a86193fd","Type":"ContainerDied","Data":"7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620"} Nov 23 08:08:25 crc kubenswrapper[5028]: I1123 08:08:25.299154 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwggw" event={"ID":"45e4d58a-4bcc-4576-b637-9c42a86193fd","Type":"ContainerStarted","Data":"2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3"} Nov 23 08:08:25 crc kubenswrapper[5028]: I1123 08:08:25.322149 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kwggw" podStartSLOduration=2.923462475 
podStartE2EDuration="5.322131174s" podCreationTimestamp="2025-11-23 08:08:20 +0000 UTC" firstStartedPulling="2025-11-23 08:08:22.271725635 +0000 UTC m=+4685.969130414" lastFinishedPulling="2025-11-23 08:08:24.670394324 +0000 UTC m=+4688.367799113" observedRunningTime="2025-11-23 08:08:25.31786871 +0000 UTC m=+4689.015273509" watchObservedRunningTime="2025-11-23 08:08:25.322131174 +0000 UTC m=+4689.019535953" Nov 23 08:08:31 crc kubenswrapper[5028]: I1123 08:08:31.237306 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:31 crc kubenswrapper[5028]: I1123 08:08:31.239139 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:31 crc kubenswrapper[5028]: I1123 08:08:31.286334 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:31 crc kubenswrapper[5028]: I1123 08:08:31.405112 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:31 crc kubenswrapper[5028]: I1123 08:08:31.513704 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwggw"] Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.359729 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kwggw" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="registry-server" containerID="cri-o://2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3" gracePeriod=2 Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.745844 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.823420 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-utilities\") pod \"45e4d58a-4bcc-4576-b637-9c42a86193fd\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.823556 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-catalog-content\") pod \"45e4d58a-4bcc-4576-b637-9c42a86193fd\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.823648 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25gsn\" (UniqueName: \"kubernetes.io/projected/45e4d58a-4bcc-4576-b637-9c42a86193fd-kube-api-access-25gsn\") pod \"45e4d58a-4bcc-4576-b637-9c42a86193fd\" (UID: \"45e4d58a-4bcc-4576-b637-9c42a86193fd\") " Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.824728 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-utilities" (OuterVolumeSpecName: "utilities") pod "45e4d58a-4bcc-4576-b637-9c42a86193fd" (UID: "45e4d58a-4bcc-4576-b637-9c42a86193fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.829716 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e4d58a-4bcc-4576-b637-9c42a86193fd-kube-api-access-25gsn" (OuterVolumeSpecName: "kube-api-access-25gsn") pod "45e4d58a-4bcc-4576-b637-9c42a86193fd" (UID: "45e4d58a-4bcc-4576-b637-9c42a86193fd"). InnerVolumeSpecName "kube-api-access-25gsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.924921 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8kmjd"] Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.925047 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25gsn\" (UniqueName: \"kubernetes.io/projected/45e4d58a-4bcc-4576-b637-9c42a86193fd-kube-api-access-25gsn\") on node \"crc\" DevicePath \"\"" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.925065 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:08:33 crc kubenswrapper[5028]: E1123 08:08:33.925264 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="registry-server" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.925276 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="registry-server" Nov 23 08:08:33 crc kubenswrapper[5028]: E1123 08:08:33.925303 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="extract-content" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.925309 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="extract-content" Nov 23 08:08:33 crc kubenswrapper[5028]: E1123 08:08:33.925320 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="extract-utilities" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.925326 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="extract-utilities" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.925452 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerName="registry-server" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.926455 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:33 crc kubenswrapper[5028]: I1123 08:08:33.932246 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kmjd"] Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.026066 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rnrs\" (UniqueName: \"kubernetes.io/projected/250ea00d-0d53-49ed-b786-b5a313c39288-kube-api-access-2rnrs\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.026121 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-catalog-content\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.026162 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-utilities\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.030281 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45e4d58a-4bcc-4576-b637-9c42a86193fd" (UID: "45e4d58a-4bcc-4576-b637-9c42a86193fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.127557 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rnrs\" (UniqueName: \"kubernetes.io/projected/250ea00d-0d53-49ed-b786-b5a313c39288-kube-api-access-2rnrs\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.128319 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-catalog-content\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.128329 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-catalog-content\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.128451 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-utilities\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.128770 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-utilities\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.129060 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45e4d58a-4bcc-4576-b637-9c42a86193fd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.147687 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rnrs\" (UniqueName: \"kubernetes.io/projected/250ea00d-0d53-49ed-b786-b5a313c39288-kube-api-access-2rnrs\") pod \"redhat-operators-8kmjd\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.281411 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.367156 5028 generic.go:334] "Generic (PLEG): container finished" podID="45e4d58a-4bcc-4576-b637-9c42a86193fd" containerID="2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3" exitCode=0 Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.367205 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwggw" event={"ID":"45e4d58a-4bcc-4576-b637-9c42a86193fd","Type":"ContainerDied","Data":"2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3"} Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.367231 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwggw" event={"ID":"45e4d58a-4bcc-4576-b637-9c42a86193fd","Type":"ContainerDied","Data":"a2ded29f8f59af9778111e7d68a3fb6447dd7873df869a230c6b123eafeeb940"} Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.367249 5028 scope.go:117] "RemoveContainer" containerID="2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.367363 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwggw" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.393339 5028 scope.go:117] "RemoveContainer" containerID="7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.682597 5028 scope.go:117] "RemoveContainer" containerID="32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.709766 5028 scope.go:117] "RemoveContainer" containerID="2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.712680 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwggw"] Nov 23 08:08:34 crc kubenswrapper[5028]: E1123 08:08:34.721102 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3\": container with ID starting with 2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3 not found: ID does not exist" containerID="2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.721147 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3"} err="failed to get container status \"2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3\": rpc error: code = NotFound desc = could not find container \"2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3\": container with ID starting with 2d192e2e261be53b062ff07a085d6ecf5ba950845e4692c9ba0eff50195957e3 not found: ID does not exist" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.721178 5028 scope.go:117] "RemoveContainer" containerID="7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620" Nov 23 08:08:34 crc kubenswrapper[5028]: E1123 08:08:34.728033 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620\": container with ID starting with 7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620 not found: ID does not exist" containerID="7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.728091 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620"} err="failed to get container status \"7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620\": rpc error: code = NotFound desc = could not find container \"7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620\": container with ID starting with 7827424d6f01749135d6f1f86fcfb8d6152f538cefa6fc4f37754f81e19c0620 not found: ID does not exist" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.728115 5028 scope.go:117] "RemoveContainer" containerID="32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.736153 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kwggw"] Nov 23 08:08:34 crc kubenswrapper[5028]: E1123 08:08:34.741163 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896\": container with ID starting with 32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896 not found: ID does not exist" containerID="32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896" Nov 23 08:08:34 crc kubenswrapper[5028]: I1123 08:08:34.741204 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896"} err="failed to get container status \"32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896\": rpc error: code = NotFound desc = could not find container \"32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896\": container with ID starting with 32779852033f64e90e44c1379da29fb7ce41370605b5091a05935d70b0919896 not found: ID does not exist" Nov 23 08:08:35 crc kubenswrapper[5028]: I1123 08:08:35.088790 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e4d58a-4bcc-4576-b637-9c42a86193fd" path="/var/lib/kubelet/pods/45e4d58a-4bcc-4576-b637-9c42a86193fd/volumes" Nov 23 08:08:35 crc kubenswrapper[5028]: I1123 08:08:35.123638 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kmjd"] Nov 23 08:08:35 crc kubenswrapper[5028]: W1123 08:08:35.130293 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250ea00d_0d53_49ed_b786_b5a313c39288.slice/crio-980867b34a9795178c3362780fbfcbd8ef6851abc909e83fcce096dc02525126 WatchSource:0}: Error finding container 980867b34a9795178c3362780fbfcbd8ef6851abc909e83fcce096dc02525126: Status 404 returned error can't find the container with id 980867b34a9795178c3362780fbfcbd8ef6851abc909e83fcce096dc02525126 Nov 23 08:08:35 crc kubenswrapper[5028]: I1123 08:08:35.376862 5028 generic.go:334] "Generic (PLEG): container finished" podID="250ea00d-0d53-49ed-b786-b5a313c39288" containerID="ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2" exitCode=0 Nov 23 08:08:35 crc kubenswrapper[5028]: I1123 
08:08:35.376935 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmjd" event={"ID":"250ea00d-0d53-49ed-b786-b5a313c39288","Type":"ContainerDied","Data":"ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2"} Nov 23 08:08:35 crc kubenswrapper[5028]: I1123 08:08:35.377514 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmjd" event={"ID":"250ea00d-0d53-49ed-b786-b5a313c39288","Type":"ContainerStarted","Data":"980867b34a9795178c3362780fbfcbd8ef6851abc909e83fcce096dc02525126"} Nov 23 08:08:36 crc kubenswrapper[5028]: I1123 08:08:36.390582 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmjd" event={"ID":"250ea00d-0d53-49ed-b786-b5a313c39288","Type":"ContainerStarted","Data":"5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e"} Nov 23 08:08:37 crc kubenswrapper[5028]: I1123 08:08:37.399239 5028 generic.go:334] "Generic (PLEG): container finished" podID="250ea00d-0d53-49ed-b786-b5a313c39288" containerID="5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e" exitCode=0 Nov 23 08:08:37 crc kubenswrapper[5028]: I1123 08:08:37.399287 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmjd" event={"ID":"250ea00d-0d53-49ed-b786-b5a313c39288","Type":"ContainerDied","Data":"5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e"} Nov 23 08:08:38 crc kubenswrapper[5028]: I1123 08:08:38.407468 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmjd" event={"ID":"250ea00d-0d53-49ed-b786-b5a313c39288","Type":"ContainerStarted","Data":"06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e"} Nov 23 08:08:38 crc kubenswrapper[5028]: I1123 08:08:38.435604 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8kmjd" podStartSLOduration=3.02546799 podStartE2EDuration="5.435583719s" podCreationTimestamp="2025-11-23 08:08:33 +0000 UTC" firstStartedPulling="2025-11-23 08:08:35.379521791 +0000 UTC m=+4699.076926590" lastFinishedPulling="2025-11-23 08:08:37.78963753 +0000 UTC m=+4701.487042319" observedRunningTime="2025-11-23 08:08:38.430544715 +0000 UTC m=+4702.127949494" watchObservedRunningTime="2025-11-23 08:08:38.435583719 +0000 UTC m=+4702.132988498" Nov 23 08:08:39 crc kubenswrapper[5028]: I1123 08:08:39.053299 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:08:39 crc kubenswrapper[5028]: E1123 08:08:39.053865 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:08:44 crc kubenswrapper[5028]: I1123 08:08:44.282805 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:44 crc kubenswrapper[5028]: I1123 08:08:44.283296 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:44 crc kubenswrapper[5028]: I1123 08:08:44.333790 5028 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:44 crc kubenswrapper[5028]: I1123 08:08:44.505409 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:44 crc kubenswrapper[5028]: I1123 08:08:44.573405 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kmjd"] Nov 23 08:08:46 crc kubenswrapper[5028]: I1123 08:08:46.477320 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8kmjd" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="registry-server" containerID="cri-o://06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e" gracePeriod=2 Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.458704 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.487084 5028 generic.go:334] "Generic (PLEG): container finished" podID="250ea00d-0d53-49ed-b786-b5a313c39288" containerID="06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e" exitCode=0 Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.487136 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmjd" event={"ID":"250ea00d-0d53-49ed-b786-b5a313c39288","Type":"ContainerDied","Data":"06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e"} Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.487191 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmjd" event={"ID":"250ea00d-0d53-49ed-b786-b5a313c39288","Type":"ContainerDied","Data":"980867b34a9795178c3362780fbfcbd8ef6851abc909e83fcce096dc02525126"} Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.487214 5028 scope.go:117] "RemoveContainer" containerID="06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.487351 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kmjd" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.506122 5028 scope.go:117] "RemoveContainer" containerID="5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.523331 5028 scope.go:117] "RemoveContainer" containerID="ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.544346 5028 scope.go:117] "RemoveContainer" containerID="06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e" Nov 23 08:08:47 crc kubenswrapper[5028]: E1123 08:08:47.544779 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e\": container with ID starting with 06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e not found: ID does not exist" containerID="06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.544810 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e"} err="failed to get container status \"06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e\": rpc error: code = NotFound desc = could not find container \"06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e\": container with ID starting with 06a5f4213869949aafd5d02e5765e86b7ca0d5e095bcd8f943b92fc53d2a1c5e not found: ID does not exist" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.544848 5028 scope.go:117] "RemoveContainer" containerID="5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e" Nov 23 08:08:47 crc kubenswrapper[5028]: E1123 08:08:47.545156 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e\": container with ID starting with 5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e not found: ID does not exist" containerID="5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.545193 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e"} err="failed to get container status \"5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e\": rpc error: code = NotFound desc = could not find container \"5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e\": container with ID starting with 5b68930a1dc47ad3ab6a4a882f380ced6ffcf8f99135013437a36872bb08658e not found: ID does not exist" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.545205 5028 scope.go:117] "RemoveContainer" containerID="ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2" Nov 23 08:08:47 crc kubenswrapper[5028]: E1123 08:08:47.545473 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2\": container with ID starting with ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2 not found: ID does not exist" containerID="ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2" 
Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.545529 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2"} err="failed to get container status \"ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2\": rpc error: code = NotFound desc = could not find container \"ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2\": container with ID starting with ef27714b022c7a2fb3ad44095f22680b98101d78a0e1490eaf892de76ac3fca2 not found: ID does not exist" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.656872 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-utilities\") pod \"250ea00d-0d53-49ed-b786-b5a313c39288\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.656975 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-catalog-content\") pod \"250ea00d-0d53-49ed-b786-b5a313c39288\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.657013 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rnrs\" (UniqueName: \"kubernetes.io/projected/250ea00d-0d53-49ed-b786-b5a313c39288-kube-api-access-2rnrs\") pod \"250ea00d-0d53-49ed-b786-b5a313c39288\" (UID: \"250ea00d-0d53-49ed-b786-b5a313c39288\") " Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.658559 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-utilities" (OuterVolumeSpecName: "utilities") pod "250ea00d-0d53-49ed-b786-b5a313c39288" (UID: "250ea00d-0d53-49ed-b786-b5a313c39288"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.668720 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250ea00d-0d53-49ed-b786-b5a313c39288-kube-api-access-2rnrs" (OuterVolumeSpecName: "kube-api-access-2rnrs") pod "250ea00d-0d53-49ed-b786-b5a313c39288" (UID: "250ea00d-0d53-49ed-b786-b5a313c39288"). InnerVolumeSpecName "kube-api-access-2rnrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.761025 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rnrs\" (UniqueName: \"kubernetes.io/projected/250ea00d-0d53-49ed-b786-b5a313c39288-kube-api-access-2rnrs\") on node \"crc\" DevicePath \"\"" Nov 23 08:08:47 crc kubenswrapper[5028]: I1123 08:08:47.761066 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:08:48 crc kubenswrapper[5028]: I1123 08:08:48.316794 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "250ea00d-0d53-49ed-b786-b5a313c39288" (UID: "250ea00d-0d53-49ed-b786-b5a313c39288"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:08:48 crc kubenswrapper[5028]: I1123 08:08:48.369708 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250ea00d-0d53-49ed-b786-b5a313c39288-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:08:48 crc kubenswrapper[5028]: I1123 08:08:48.421533 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8kmjd"] Nov 23 08:08:48 crc kubenswrapper[5028]: I1123 08:08:48.426992 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8kmjd"] Nov 23 08:08:49 crc kubenswrapper[5028]: I1123 08:08:49.060688 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" path="/var/lib/kubelet/pods/250ea00d-0d53-49ed-b786-b5a313c39288/volumes" Nov 23 08:08:51 crc kubenswrapper[5028]: I1123 08:08:51.053173 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:08:51 crc kubenswrapper[5028]: E1123 08:08:51.053803 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:09:04 crc kubenswrapper[5028]: I1123 08:09:04.053483 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:09:04 crc kubenswrapper[5028]: E1123 08:09:04.054299 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:09:16 crc kubenswrapper[5028]: I1123 08:09:16.053609 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:09:16 crc kubenswrapper[5028]: E1123 08:09:16.054705 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:09:31 crc kubenswrapper[5028]: I1123 08:09:31.053351 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:09:31 crc kubenswrapper[5028]: E1123 08:09:31.055857 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:09:45 crc kubenswrapper[5028]: I1123 08:09:45.052867 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:09:45 crc kubenswrapper[5028]: E1123 08:09:45.053621 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:09:58 crc kubenswrapper[5028]: I1123 08:09:58.054149 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:09:58 crc kubenswrapper[5028]: E1123 08:09:58.054929 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:10:12 crc kubenswrapper[5028]: I1123 08:10:12.053355 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:10:12 crc kubenswrapper[5028]: E1123 08:10:12.053999 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:10:24 crc kubenswrapper[5028]: I1123 08:10:24.053593 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:10:24 crc kubenswrapper[5028]: E1123 08:10:24.055363 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:10:37 crc kubenswrapper[5028]: I1123 08:10:37.061669 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:10:37 crc kubenswrapper[5028]: E1123 08:10:37.062419 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:10:48 crc kubenswrapper[5028]: I1123 08:10:48.053595 5028 
scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:10:48 crc kubenswrapper[5028]: E1123 08:10:48.054502 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:11:00 crc kubenswrapper[5028]: I1123 08:11:00.053652 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:11:00 crc kubenswrapper[5028]: E1123 08:11:00.054756 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:11:13 crc kubenswrapper[5028]: I1123 08:11:13.053178 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:11:13 crc kubenswrapper[5028]: E1123 08:11:13.054266 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:11:25 crc kubenswrapper[5028]: I1123 08:11:25.054614 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:11:25 crc kubenswrapper[5028]: E1123 08:11:25.055903 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:11:40 crc kubenswrapper[5028]: I1123 08:11:40.053131 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa" Nov 23 08:11:40 crc kubenswrapper[5028]: I1123 08:11:40.811867 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"d48657536c29e96eeecb7aae863205d9b149f37d9d8f20eef6f30a5d9505c89b"} Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.678818 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-czltt"] Nov 23 08:12:17 crc kubenswrapper[5028]: E1123 08:12:17.687301 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="extract-utilities" Nov 23 08:12:17 crc kubenswrapper[5028]: 
I1123 08:12:17.687320 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="extract-utilities" Nov 23 08:12:17 crc kubenswrapper[5028]: E1123 08:12:17.687340 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="registry-server" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.687348 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="registry-server" Nov 23 08:12:17 crc kubenswrapper[5028]: E1123 08:12:17.687358 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="extract-content" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.687366 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="extract-content" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.687534 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="250ea00d-0d53-49ed-b786-b5a313c39288" containerName="registry-server" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.688582 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.690545 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czltt"] Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.756369 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-utilities\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.756459 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-catalog-content\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.756507 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4glx\" (UniqueName: \"kubernetes.io/projected/92a6fa85-b69a-4779-bc6a-9320b011a278-kube-api-access-f4glx\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.858061 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-utilities\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.858138 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-catalog-content\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 
08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.858181 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4glx\" (UniqueName: \"kubernetes.io/projected/92a6fa85-b69a-4779-bc6a-9320b011a278-kube-api-access-f4glx\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.858722 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-catalog-content\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.858796 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-utilities\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:17 crc kubenswrapper[5028]: I1123 08:12:17.875558 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4glx\" (UniqueName: \"kubernetes.io/projected/92a6fa85-b69a-4779-bc6a-9320b011a278-kube-api-access-f4glx\") pod \"certified-operators-czltt\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") " pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:18 crc kubenswrapper[5028]: I1123 08:12:18.021854 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czltt" Nov 23 08:12:18 crc kubenswrapper[5028]: I1123 08:12:18.292402 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czltt"] Nov 23 08:12:18 crc kubenswrapper[5028]: E1123 08:12:18.556253 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a6fa85_b69a_4779_bc6a_9320b011a278.slice/crio-conmon-452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a6fa85_b69a_4779_bc6a_9320b011a278.slice/crio-452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:12:19 crc kubenswrapper[5028]: I1123 08:12:19.118741 5028 generic.go:334] "Generic (PLEG): container finished" podID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerID="452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625" exitCode=0 Nov 23 08:12:19 crc kubenswrapper[5028]: I1123 08:12:19.118792 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czltt" event={"ID":"92a6fa85-b69a-4779-bc6a-9320b011a278","Type":"ContainerDied","Data":"452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625"} Nov 23 08:12:19 crc kubenswrapper[5028]: I1123 08:12:19.118815 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czltt" event={"ID":"92a6fa85-b69a-4779-bc6a-9320b011a278","Type":"ContainerStarted","Data":"28b41c118ef75b249836ffa2da1c64e5f60233ed2859c806c314038e11cbad60"} Nov 23 08:12:20 crc kubenswrapper[5028]: I1123 
08:12:20.126061 5028 generic.go:334] "Generic (PLEG): container finished" podID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerID="71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f" exitCode=0
Nov 23 08:12:20 crc kubenswrapper[5028]: I1123 08:12:20.126115 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czltt" event={"ID":"92a6fa85-b69a-4779-bc6a-9320b011a278","Type":"ContainerDied","Data":"71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f"}
Nov 23 08:12:21 crc kubenswrapper[5028]: I1123 08:12:21.138456 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czltt" event={"ID":"92a6fa85-b69a-4779-bc6a-9320b011a278","Type":"ContainerStarted","Data":"dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad"}
Nov 23 08:12:21 crc kubenswrapper[5028]: I1123 08:12:21.168060 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-czltt" podStartSLOduration=2.728079874 podStartE2EDuration="4.168034156s" podCreationTimestamp="2025-11-23 08:12:17 +0000 UTC" firstStartedPulling="2025-11-23 08:12:19.120676838 +0000 UTC m=+4922.818081617" lastFinishedPulling="2025-11-23 08:12:20.56063112 +0000 UTC m=+4924.258035899" observedRunningTime="2025-11-23 08:12:21.159070477 +0000 UTC m=+4924.856475276" watchObservedRunningTime="2025-11-23 08:12:21.168034156 +0000 UTC m=+4924.865438965"
Nov 23 08:12:28 crc kubenswrapper[5028]: I1123 08:12:28.022615 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-czltt"
Nov 23 08:12:28 crc kubenswrapper[5028]: I1123 08:12:28.023102 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-czltt"
Nov 23 08:12:28 crc kubenswrapper[5028]: I1123 08:12:28.091995 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-czltt"
Nov 23 08:12:28 crc kubenswrapper[5028]: I1123 08:12:28.278423 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-czltt"
Nov 23 08:12:28 crc kubenswrapper[5028]: I1123 08:12:28.335214 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czltt"]
Nov 23 08:12:30 crc kubenswrapper[5028]: I1123 08:12:30.237299 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-czltt" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="registry-server" containerID="cri-o://dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad" gracePeriod=2
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.123029 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czltt"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.188250 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-catalog-content\") pod \"92a6fa85-b69a-4779-bc6a-9320b011a278\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") "
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.188451 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4glx\" (UniqueName: \"kubernetes.io/projected/92a6fa85-b69a-4779-bc6a-9320b011a278-kube-api-access-f4glx\") pod \"92a6fa85-b69a-4779-bc6a-9320b011a278\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") "
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.188564 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-utilities\") pod \"92a6fa85-b69a-4779-bc6a-9320b011a278\" (UID: \"92a6fa85-b69a-4779-bc6a-9320b011a278\") "
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.189517 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-utilities" (OuterVolumeSpecName: "utilities") pod "92a6fa85-b69a-4779-bc6a-9320b011a278" (UID: "92a6fa85-b69a-4779-bc6a-9320b011a278"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.194826 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a6fa85-b69a-4779-bc6a-9320b011a278-kube-api-access-f4glx" (OuterVolumeSpecName: "kube-api-access-f4glx") pod "92a6fa85-b69a-4779-bc6a-9320b011a278" (UID: "92a6fa85-b69a-4779-bc6a-9320b011a278"). InnerVolumeSpecName "kube-api-access-f4glx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.238154 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92a6fa85-b69a-4779-bc6a-9320b011a278" (UID: "92a6fa85-b69a-4779-bc6a-9320b011a278"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.249466 5028 generic.go:334] "Generic (PLEG): container finished" podID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerID="dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad" exitCode=0
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.249543 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czltt"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.249563 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czltt" event={"ID":"92a6fa85-b69a-4779-bc6a-9320b011a278","Type":"ContainerDied","Data":"dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad"}
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.250126 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czltt" event={"ID":"92a6fa85-b69a-4779-bc6a-9320b011a278","Type":"ContainerDied","Data":"28b41c118ef75b249836ffa2da1c64e5f60233ed2859c806c314038e11cbad60"}
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.250147 5028 scope.go:117] "RemoveContainer" containerID="dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.273012 5028 scope.go:117] "RemoveContainer" containerID="71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.286235 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czltt"]
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.290704 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.290749 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92a6fa85-b69a-4779-bc6a-9320b011a278-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.290764 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4glx\" (UniqueName: \"kubernetes.io/projected/92a6fa85-b69a-4779-bc6a-9320b011a278-kube-api-access-f4glx\") on node \"crc\" DevicePath \"\""
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.292493 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-czltt"]
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.311778 5028 scope.go:117] "RemoveContainer" containerID="452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.327562 5028 scope.go:117] "RemoveContainer" containerID="dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad"
Nov 23 08:12:31 crc kubenswrapper[5028]: E1123 08:12:31.328287 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad\": container with ID starting with dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad not found: ID does not exist" containerID="dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.328345 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad"} err="failed to get container status \"dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad\": rpc error: code = NotFound desc = could not find container \"dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad\": container with ID starting with dbca02dbc170320fad9adb9386063ecce9baa200df02b00052011f1655c726ad not found: ID does not exist"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.328370 5028 scope.go:117] "RemoveContainer" containerID="71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f"
Nov 23 08:12:31 crc kubenswrapper[5028]: E1123 08:12:31.328870 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f\": container with ID starting with 71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f not found: ID does not exist" containerID="71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.328894 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f"} err="failed to get container status \"71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f\": rpc error: code = NotFound desc = could not find container \"71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f\": container with ID starting with 71d55de40e97455f94d8ba4fb9b71f9ac2ef37a6b3dff9294fce8ca5fda7314f not found: ID does not exist"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.328909 5028 scope.go:117] "RemoveContainer" containerID="452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625"
Nov 23 08:12:31 crc kubenswrapper[5028]: E1123 08:12:31.329310 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625\": container with ID starting with 452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625 not found: ID does not exist" containerID="452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625"
Nov 23 08:12:31 crc kubenswrapper[5028]: I1123 08:12:31.329329 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625"} err="failed to get container status \"452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625\": rpc error: code = NotFound desc = could not find container \"452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625\": container with ID starting with 452f09693a102a2f5dab19f341aca6df7e6c7000c3267f490ce931856ce7d625 not found: ID does not exist"
Nov 23 08:12:33 crc kubenswrapper[5028]: I1123 08:12:33.074318 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" path="/var/lib/kubelet/pods/92a6fa85-b69a-4779-bc6a-9320b011a278/volumes"
Nov 23 08:14:00 crc kubenswrapper[5028]: I1123 08:14:00.945999 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:14:00 crc kubenswrapper[5028]: I1123 08:14:00.946552 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:14:30 crc kubenswrapper[5028]: I1123 08:14:30.946584 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:14:30 crc kubenswrapper[5028]: I1123 08:14:30.947603 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.147033 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"]
Nov 23 08:15:00 crc kubenswrapper[5028]: E1123 08:15:00.149363 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="registry-server"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.149399 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="registry-server"
Nov 23 08:15:00 crc kubenswrapper[5028]: E1123 08:15:00.149438 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="extract-content"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.149460 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="extract-content"
Nov 23 08:15:00 crc kubenswrapper[5028]: E1123 08:15:00.149510 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="extract-utilities"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.149530 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="extract-utilities"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.149866 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a6fa85-b69a-4779-bc6a-9320b011a278" containerName="registry-server"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.150680 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.152780 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.152856 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.155348 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"]
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.276033 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6975b8a2-9360-4d2d-bee0-fc44b3896b87-secret-volume\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.276233 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bj9\" (UniqueName: \"kubernetes.io/projected/6975b8a2-9360-4d2d-bee0-fc44b3896b87-kube-api-access-j5bj9\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.276262 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6975b8a2-9360-4d2d-bee0-fc44b3896b87-config-volume\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.377035 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5bj9\" (UniqueName: \"kubernetes.io/projected/6975b8a2-9360-4d2d-bee0-fc44b3896b87-kube-api-access-j5bj9\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.377073 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6975b8a2-9360-4d2d-bee0-fc44b3896b87-config-volume\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.377524 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6975b8a2-9360-4d2d-bee0-fc44b3896b87-secret-volume\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.378604 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6975b8a2-9360-4d2d-bee0-fc44b3896b87-config-volume\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.383791 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6975b8a2-9360-4d2d-bee0-fc44b3896b87-secret-volume\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.394684 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5bj9\" (UniqueName: \"kubernetes.io/projected/6975b8a2-9360-4d2d-bee0-fc44b3896b87-kube-api-access-j5bj9\") pod \"collect-profiles-29398095-mnp62\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.488471 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.698477 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"]
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.946518 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.946571 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.946615 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.947190 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d48657536c29e96eeecb7aae863205d9b149f37d9d8f20eef6f30a5d9505c89b"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 08:15:00 crc kubenswrapper[5028]: I1123 08:15:00.947252 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://d48657536c29e96eeecb7aae863205d9b149f37d9d8f20eef6f30a5d9505c89b" gracePeriod=600
Nov 23 08:15:01 crc kubenswrapper[5028]: I1123 08:15:01.501117 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="d48657536c29e96eeecb7aae863205d9b149f37d9d8f20eef6f30a5d9505c89b" exitCode=0
Nov 23 08:15:01 crc kubenswrapper[5028]: I1123 08:15:01.501213 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"d48657536c29e96eeecb7aae863205d9b149f37d9d8f20eef6f30a5d9505c89b"}
Nov 23 08:15:01 crc kubenswrapper[5028]: I1123 08:15:01.502406 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"}
Nov 23 08:15:01 crc kubenswrapper[5028]: I1123 08:15:01.502427 5028 scope.go:117] "RemoveContainer" containerID="92100caf73c8d30419f648195b027526ce1b4fbc70e6cc0d3ebf5d3d5eca1daa"
Nov 23 08:15:01 crc kubenswrapper[5028]: I1123 08:15:01.504084 5028 generic.go:334] "Generic (PLEG): container finished" podID="6975b8a2-9360-4d2d-bee0-fc44b3896b87" containerID="13ab57c4be751a6e5cab4e1f4b7be17b9bd8f3aae2180a669fed061fb0a53d18" exitCode=0
Nov 23 08:15:01 crc kubenswrapper[5028]: I1123 08:15:01.504122 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62" event={"ID":"6975b8a2-9360-4d2d-bee0-fc44b3896b87","Type":"ContainerDied","Data":"13ab57c4be751a6e5cab4e1f4b7be17b9bd8f3aae2180a669fed061fb0a53d18"}
Nov 23 08:15:01 crc kubenswrapper[5028]: I1123 08:15:01.504182 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62" event={"ID":"6975b8a2-9360-4d2d-bee0-fc44b3896b87","Type":"ContainerStarted","Data":"c0df5da0a2ac41526bb616629831a0e42667bd559c7d2f9fc9aad5b0c6caab30"}
Nov 23 08:15:02 crc kubenswrapper[5028]: I1123 08:15:02.870511 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:02 crc kubenswrapper[5028]: I1123 08:15:02.923568 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6975b8a2-9360-4d2d-bee0-fc44b3896b87-secret-volume\") pod \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") "
Nov 23 08:15:02 crc kubenswrapper[5028]: I1123 08:15:02.923638 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6975b8a2-9360-4d2d-bee0-fc44b3896b87-config-volume\") pod \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") "
Nov 23 08:15:02 crc kubenswrapper[5028]: I1123 08:15:02.923695 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5bj9\" (UniqueName: \"kubernetes.io/projected/6975b8a2-9360-4d2d-bee0-fc44b3896b87-kube-api-access-j5bj9\") pod \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\" (UID: \"6975b8a2-9360-4d2d-bee0-fc44b3896b87\") "
Nov 23 08:15:02 crc kubenswrapper[5028]: I1123 08:15:02.925355 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6975b8a2-9360-4d2d-bee0-fc44b3896b87-config-volume" (OuterVolumeSpecName: "config-volume") pod "6975b8a2-9360-4d2d-bee0-fc44b3896b87" (UID: "6975b8a2-9360-4d2d-bee0-fc44b3896b87"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:15:02 crc kubenswrapper[5028]: I1123 08:15:02.929772 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6975b8a2-9360-4d2d-bee0-fc44b3896b87-kube-api-access-j5bj9" (OuterVolumeSpecName: "kube-api-access-j5bj9") pod "6975b8a2-9360-4d2d-bee0-fc44b3896b87" (UID: "6975b8a2-9360-4d2d-bee0-fc44b3896b87"). InnerVolumeSpecName "kube-api-access-j5bj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:15:02 crc kubenswrapper[5028]: I1123 08:15:02.930057 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6975b8a2-9360-4d2d-bee0-fc44b3896b87-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6975b8a2-9360-4d2d-bee0-fc44b3896b87" (UID: "6975b8a2-9360-4d2d-bee0-fc44b3896b87"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.025027 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5bj9\" (UniqueName: \"kubernetes.io/projected/6975b8a2-9360-4d2d-bee0-fc44b3896b87-kube-api-access-j5bj9\") on node \"crc\" DevicePath \"\""
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.025056 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6975b8a2-9360-4d2d-bee0-fc44b3896b87-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.025067 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6975b8a2-9360-4d2d-bee0-fc44b3896b87-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.527268 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62" event={"ID":"6975b8a2-9360-4d2d-bee0-fc44b3896b87","Type":"ContainerDied","Data":"c0df5da0a2ac41526bb616629831a0e42667bd559c7d2f9fc9aad5b0c6caab30"}
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.527308 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0df5da0a2ac41526bb616629831a0e42667bd559c7d2f9fc9aad5b0c6caab30"
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.527345 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.953016 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"]
Nov 23 08:15:03 crc kubenswrapper[5028]: I1123 08:15:03.958060 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398050-m6kdj"]
Nov 23 08:15:05 crc kubenswrapper[5028]: I1123 08:15:05.068264 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2342d5-d133-44ae-957c-1a77cf088185" path="/var/lib/kubelet/pods/2d2342d5-d133-44ae-957c-1a77cf088185/volumes"
Nov 23 08:15:24 crc kubenswrapper[5028]: I1123 08:15:24.205119 5028 scope.go:117] "RemoveContainer" containerID="c7c51d505ec8647d45a099c5fe1e422e5bcd9776e93ec88296d94de0425b2c86"
Nov 23 08:17:30 crc kubenswrapper[5028]: I1123 08:17:30.946273 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:17:30 crc kubenswrapper[5028]: I1123 08:17:30.946931 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:18:00 crc kubenswrapper[5028]: I1123 08:18:00.946334 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:18:00 crc kubenswrapper[5028]: I1123 08:18:00.947025 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:18:30 crc kubenswrapper[5028]: I1123 08:18:30.946362 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:18:30 crc kubenswrapper[5028]: I1123 08:18:30.946943 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:18:30 crc kubenswrapper[5028]: I1123 08:18:30.947045 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 08:18:30 crc kubenswrapper[5028]: I1123 08:18:30.947856 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 08:18:30 crc kubenswrapper[5028]: I1123 08:18:30.947946 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" gracePeriod=600
Nov 23 08:18:31 crc kubenswrapper[5028]: E1123 08:18:31.084609 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:18:31 crc kubenswrapper[5028]: I1123 08:18:31.194660 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" exitCode=0
Nov 23 08:18:31 crc kubenswrapper[5028]: I1123 08:18:31.194730 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"}
Nov 23 08:18:31 crc kubenswrapper[5028]: I1123 08:18:31.194780 5028 scope.go:117] "RemoveContainer" containerID="d48657536c29e96eeecb7aae863205d9b149f37d9d8f20eef6f30a5d9505c89b"
Nov 23 08:18:31 crc kubenswrapper[5028]: I1123 08:18:31.196016 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:18:31 crc kubenswrapper[5028]: E1123 08:18:31.196791 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:18:45 crc kubenswrapper[5028]: I1123 08:18:45.053275 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:18:45 crc kubenswrapper[5028]: E1123 08:18:45.054272 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:18:56 crc kubenswrapper[5028]: I1123 08:18:56.052888 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:18:56 crc kubenswrapper[5028]: E1123 08:18:56.053694 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:19:10 crc kubenswrapper[5028]: I1123 08:19:10.054025 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:19:10 crc kubenswrapper[5028]: E1123 08:19:10.054720 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:19:25 crc kubenswrapper[5028]: I1123 08:19:25.053841 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:19:25 crc kubenswrapper[5028]: E1123 08:19:25.054905 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:19:32 crc kubenswrapper[5028]: I1123 08:19:32.946462 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5v682"]
Nov 23 08:19:32 crc kubenswrapper[5028]: E1123 08:19:32.947279 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6975b8a2-9360-4d2d-bee0-fc44b3896b87" containerName="collect-profiles"
Nov 23 08:19:32 crc kubenswrapper[5028]: I1123 08:19:32.947293 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6975b8a2-9360-4d2d-bee0-fc44b3896b87" containerName="collect-profiles"
Nov 23 08:19:32 crc kubenswrapper[5028]: I1123 08:19:32.947416 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6975b8a2-9360-4d2d-bee0-fc44b3896b87" containerName="collect-profiles"
Nov 23 08:19:32 crc kubenswrapper[5028]: I1123 08:19:32.948925 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:32 crc kubenswrapper[5028]: I1123 08:19:32.976067 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5v682"]
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.061706 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-utilities\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.061767 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwndv\" (UniqueName: \"kubernetes.io/projected/3def08e4-9554-413e-a386-279098bc7964-kube-api-access-pwndv\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.062076 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-catalog-content\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.163341 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-utilities\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.163619 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwndv\" (UniqueName: \"kubernetes.io/projected/3def08e4-9554-413e-a386-279098bc7964-kube-api-access-pwndv\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.163693 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-catalog-content\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.163934 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-utilities\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.164174 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-catalog-content\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.183656 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwndv\" (UniqueName: \"kubernetes.io/projected/3def08e4-9554-413e-a386-279098bc7964-kube-api-access-pwndv\") pod \"redhat-operators-5v682\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") " pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.276782 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:33 crc kubenswrapper[5028]: I1123 08:19:33.754836 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5v682"]
Nov 23 08:19:34 crc kubenswrapper[5028]: I1123 08:19:34.975229 5028 generic.go:334] "Generic (PLEG): container finished" podID="3def08e4-9554-413e-a386-279098bc7964" containerID="205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22" exitCode=0
Nov 23 08:19:34 crc kubenswrapper[5028]: I1123 08:19:34.975273 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v682" event={"ID":"3def08e4-9554-413e-a386-279098bc7964","Type":"ContainerDied","Data":"205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22"}
Nov 23 08:19:34 crc kubenswrapper[5028]: I1123 08:19:34.975297 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v682" event={"ID":"3def08e4-9554-413e-a386-279098bc7964","Type":"ContainerStarted","Data":"2d9b9b2a17559d0ced62db8aca2d2a9d53aad190969bb063bc3dad0769471158"}
Nov 23 08:19:34 crc kubenswrapper[5028]: I1123 08:19:34.978917 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 08:19:36 crc kubenswrapper[5028]: I1123 08:19:36.000920 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v682" event={"ID":"3def08e4-9554-413e-a386-279098bc7964","Type":"ContainerStarted","Data":"2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69"}
Nov 23 08:19:37 crc kubenswrapper[5028]: I1123 08:19:37.010735 5028 generic.go:334] "Generic (PLEG): container finished" podID="3def08e4-9554-413e-a386-279098bc7964" containerID="2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69" exitCode=0
Nov 23 08:19:37 crc kubenswrapper[5028]: I1123 08:19:37.011074 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v682" event={"ID":"3def08e4-9554-413e-a386-279098bc7964","Type":"ContainerDied","Data":"2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69"}
Nov 23 08:19:38 crc kubenswrapper[5028]: I1123 08:19:38.032682 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v682" event={"ID":"3def08e4-9554-413e-a386-279098bc7964","Type":"ContainerStarted","Data":"d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf"}
Nov 23 08:19:38 crc kubenswrapper[5028]: I1123 08:19:38.053472 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5v682" podStartSLOduration=3.5604617530000002 podStartE2EDuration="6.053456439s" podCreationTimestamp="2025-11-23 08:19:32 +0000 UTC" firstStartedPulling="2025-11-23 08:19:34.978614679 +0000 UTC m=+5358.676019468" lastFinishedPulling="2025-11-23 08:19:37.471609375 +0000 UTC m=+5361.169014154" observedRunningTime="2025-11-23 08:19:38.051389658 +0000 UTC m=+5361.748794437" watchObservedRunningTime="2025-11-23 08:19:38.053456439 +0000 UTC m=+5361.750861208"
Nov 23 08:19:39 crc kubenswrapper[5028]: I1123 08:19:39.054062 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:19:39 crc kubenswrapper[5028]: E1123 08:19:39.054326 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:19:43 crc kubenswrapper[5028]: I1123 08:19:43.277546 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:43 crc kubenswrapper[5028]: I1123 08:19:43.277991 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:43 crc kubenswrapper[5028]: I1123 08:19:43.352146 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:44 crc kubenswrapper[5028]: I1123 08:19:44.124775 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:44 crc kubenswrapper[5028]: I1123 08:19:44.180792 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5v682"]
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.087368 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5v682" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="registry-server" containerID="cri-o://d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf" gracePeriod=2
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.486915 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.510699 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwndv\" (UniqueName: \"kubernetes.io/projected/3def08e4-9554-413e-a386-279098bc7964-kube-api-access-pwndv\") pod \"3def08e4-9554-413e-a386-279098bc7964\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") "
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.510901 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-catalog-content\") pod \"3def08e4-9554-413e-a386-279098bc7964\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") "
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.510967 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-utilities\") pod \"3def08e4-9554-413e-a386-279098bc7964\" (UID: \"3def08e4-9554-413e-a386-279098bc7964\") "
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.513347 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-utilities" (OuterVolumeSpecName: "utilities") pod "3def08e4-9554-413e-a386-279098bc7964" (UID: "3def08e4-9554-413e-a386-279098bc7964"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.523363 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3def08e4-9554-413e-a386-279098bc7964-kube-api-access-pwndv" (OuterVolumeSpecName: "kube-api-access-pwndv") pod "3def08e4-9554-413e-a386-279098bc7964" (UID: "3def08e4-9554-413e-a386-279098bc7964"). InnerVolumeSpecName "kube-api-access-pwndv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.612612 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 08:19:46 crc kubenswrapper[5028]: I1123 08:19:46.612644 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwndv\" (UniqueName: \"kubernetes.io/projected/3def08e4-9554-413e-a386-279098bc7964-kube-api-access-pwndv\") on node \"crc\" DevicePath \"\""
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.099439 5028 generic.go:334] "Generic (PLEG): container finished" podID="3def08e4-9554-413e-a386-279098bc7964" containerID="d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf" exitCode=0
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.099480 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v682" event={"ID":"3def08e4-9554-413e-a386-279098bc7964","Type":"ContainerDied","Data":"d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf"}
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.099523 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5v682"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.099537 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5v682" event={"ID":"3def08e4-9554-413e-a386-279098bc7964","Type":"ContainerDied","Data":"2d9b9b2a17559d0ced62db8aca2d2a9d53aad190969bb063bc3dad0769471158"}
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.099557 5028 scope.go:117] "RemoveContainer" containerID="d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.127382 5028 scope.go:117] "RemoveContainer" containerID="2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.153529 5028 scope.go:117] "RemoveContainer" containerID="205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.173890 5028 scope.go:117] "RemoveContainer" containerID="d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf"
Nov 23 08:19:47 crc kubenswrapper[5028]: E1123 08:19:47.174483 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf\": container with ID starting with d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf not found: ID does not exist" containerID="d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.174559 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf"} err="failed to get container status \"d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf\": rpc error: code = NotFound desc = could not find container \"d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf\": container with ID starting with d3a298184096b834121e55e5156868070851ec620d7d854da44b4d1f7abf6bcf not found: ID does not exist"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.174596 5028 scope.go:117] "RemoveContainer" containerID="2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69"
Nov 23 08:19:47 crc kubenswrapper[5028]: E1123 08:19:47.175220 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69\": container with ID starting with 2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69 not found: ID does not exist" containerID="2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.175243 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69"} err="failed to get container status \"2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69\": rpc error: code = NotFound desc = could not find container \"2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69\": container with ID starting with 2a723c4366ea12686b74f4c2c65947bc5246b153fc83b64b593efe57c2838f69 not found: ID does not exist"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.175258 5028 scope.go:117] "RemoveContainer" containerID="205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22"
Nov 23 08:19:47 crc kubenswrapper[5028]: E1123 08:19:47.175784 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22\": container with ID starting with 205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22 not found: ID does not exist" containerID="205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22"
Nov 23 08:19:47 crc kubenswrapper[5028]: I1123 08:19:47.175805 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22"} err="failed to get container status \"205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22\": rpc error: code = NotFound desc = could not find container \"205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22\": container with ID starting with 205e3ee91ee1ec6663853ebc42953e5ed772c3ffa64d6965bd51f8ea9fe47f22 not found: ID does not exist"
Nov 23 08:19:49 crc kubenswrapper[5028]: I1123 08:19:49.587152 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3def08e4-9554-413e-a386-279098bc7964" (UID: "3def08e4-9554-413e-a386-279098bc7964"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:19:49 crc kubenswrapper[5028]: I1123 08:19:49.653933 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3def08e4-9554-413e-a386-279098bc7964-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 08:19:49 crc kubenswrapper[5028]: I1123 08:19:49.840529 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5v682"]
Nov 23 08:19:49 crc kubenswrapper[5028]: I1123 08:19:49.846972 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5v682"]
Nov 23 08:19:50 crc kubenswrapper[5028]: I1123 08:19:50.054049 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:19:50 crc kubenswrapper[5028]: E1123 08:19:50.054352 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:19:51 crc kubenswrapper[5028]: I1123 08:19:51.072555 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3def08e4-9554-413e-a386-279098bc7964" path="/var/lib/kubelet/pods/3def08e4-9554-413e-a386-279098bc7964/volumes"
Nov 23 08:20:04 crc kubenswrapper[5028]: I1123 08:20:04.053543 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a"
Nov 23 08:20:04 crc kubenswrapper[5028]: E1123 08:20:04.055713 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.529684 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2ldhl"]
Nov 23 08:20:07 crc kubenswrapper[5028]: E1123 08:20:07.530378 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="registry-server"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.530392 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="registry-server"
Nov 23 08:20:07 crc kubenswrapper[5028]: E1123 08:20:07.530412 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="extract-utilities"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.530419 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="extract-utilities"
Nov 23 08:20:07 crc kubenswrapper[5028]: E1123 08:20:07.530433 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="extract-content"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.530440 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="extract-content"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.530583 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3def08e4-9554-413e-a386-279098bc7964" containerName="registry-server"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.531712 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.549066 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2ldhl"]
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.731569 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6q42\" (UniqueName: \"kubernetes.io/projected/b229bd62-02fb-4237-9ec1-85abac9c2a01-kube-api-access-f6q42\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.731644 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-catalog-content\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.731667 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-utilities\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.833075 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-catalog-content\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.833142 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-utilities\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.833246 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6q42\" (UniqueName: \"kubernetes.io/projected/b229bd62-02fb-4237-9ec1-85abac9c2a01-kube-api-access-f6q42\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.833672 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-utilities\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.833809 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-catalog-content\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:07 crc kubenswrapper[5028]: I1123 08:20:07.858409 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6q42\" (UniqueName: \"kubernetes.io/projected/b229bd62-02fb-4237-9ec1-85abac9c2a01-kube-api-access-f6q42\") pod \"community-operators-2ldhl\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:08 crc kubenswrapper[5028]: I1123 08:20:08.155895 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2ldhl"
Nov 23 08:20:08 crc kubenswrapper[5028]: I1123 08:20:08.589267 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2ldhl"]
Nov 23 08:20:08 crc kubenswrapper[5028]: I1123 08:20:08.925899 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc8l"]
Nov 23 08:20:08 crc kubenswrapper[5028]: I1123 08:20:08.927405 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:08 crc kubenswrapper[5028]: I1123 08:20:08.935129 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc8l"]
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.049965 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-catalog-content\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.050022 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-utilities\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.050079 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlvz\" (UniqueName: \"kubernetes.io/projected/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-kube-api-access-tqlvz\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.151622 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlvz\" (UniqueName: \"kubernetes.io/projected/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-kube-api-access-tqlvz\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.151715 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-catalog-content\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.151749 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-utilities\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.152296 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-utilities\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.153732 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-catalog-content\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.172545 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqlvz\" (UniqueName: \"kubernetes.io/projected/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-kube-api-access-tqlvz\") pod \"redhat-marketplace-lhc8l\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.247378 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc8l"
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.284104 5028 generic.go:334] "Generic (PLEG): container finished" podID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerID="289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e" exitCode=0
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.284167 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ldhl" event={"ID":"b229bd62-02fb-4237-9ec1-85abac9c2a01","Type":"ContainerDied","Data":"289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e"}
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.284190 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ldhl" event={"ID":"b229bd62-02fb-4237-9ec1-85abac9c2a01","Type":"ContainerStarted","Data":"01d44e77e4c5c41a3c4c28c7482b5e9f7ff2f0d1c7676dad3bb472d570867c2b"}
Nov 23 08:20:09 crc kubenswrapper[5028]: I1123 08:20:09.714608 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc8l"]
Nov 23 08:20:09 crc kubenswrapper[5028]: W1123 08:20:09.718497 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5cf6ff4_7723_4c7f_82ea_4e9ba5e83c2e.slice/crio-cac3581d43c86c8834498f9f6345c025dcc13e2e46cc90ae271a2447c91b979e WatchSource:0}: Error finding container cac3581d43c86c8834498f9f6345c025dcc13e2e46cc90ae271a2447c91b979e: Status 404 returned error can't find the container with id cac3581d43c86c8834498f9f6345c025dcc13e2e46cc90ae271a2447c91b979e
Nov 23 08:20:10 crc kubenswrapper[5028]: I1123 08:20:10.298712 5028 generic.go:334] "Generic (PLEG): container finished" podID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerID="9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e" exitCode=0
Nov 23 08:20:10 crc kubenswrapper[5028]: I1123 08:20:10.298782 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc8l"
event={"ID":"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e","Type":"ContainerDied","Data":"9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e"} Nov 23 08:20:10 crc kubenswrapper[5028]: I1123 08:20:10.299101 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc8l" event={"ID":"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e","Type":"ContainerStarted","Data":"cac3581d43c86c8834498f9f6345c025dcc13e2e46cc90ae271a2447c91b979e"} Nov 23 08:20:10 crc kubenswrapper[5028]: I1123 08:20:10.303413 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ldhl" event={"ID":"b229bd62-02fb-4237-9ec1-85abac9c2a01","Type":"ContainerStarted","Data":"c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f"} Nov 23 08:20:11 crc kubenswrapper[5028]: I1123 08:20:11.314479 5028 generic.go:334] "Generic (PLEG): container finished" podID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerID="63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911" exitCode=0 Nov 23 08:20:11 crc kubenswrapper[5028]: I1123 08:20:11.314530 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc8l" event={"ID":"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e","Type":"ContainerDied","Data":"63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911"} Nov 23 08:20:11 crc kubenswrapper[5028]: I1123 08:20:11.321688 5028 generic.go:334] "Generic (PLEG): container finished" podID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerID="c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f" exitCode=0 Nov 23 08:20:11 crc kubenswrapper[5028]: I1123 08:20:11.321789 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ldhl" event={"ID":"b229bd62-02fb-4237-9ec1-85abac9c2a01","Type":"ContainerDied","Data":"c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f"} Nov 23 08:20:12 crc kubenswrapper[5028]: I1123 08:20:12.334069 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ldhl" event={"ID":"b229bd62-02fb-4237-9ec1-85abac9c2a01","Type":"ContainerStarted","Data":"dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70"} Nov 23 08:20:12 crc kubenswrapper[5028]: I1123 08:20:12.335695 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc8l" event={"ID":"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e","Type":"ContainerStarted","Data":"90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4"} Nov 23 08:20:12 crc kubenswrapper[5028]: I1123 08:20:12.353159 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2ldhl" podStartSLOduration=2.8914114189999998 podStartE2EDuration="5.353145101s" podCreationTimestamp="2025-11-23 08:20:07 +0000 UTC" firstStartedPulling="2025-11-23 08:20:09.286605355 +0000 UTC m=+5392.984010154" lastFinishedPulling="2025-11-23 08:20:11.748339057 +0000 UTC m=+5395.445743836" observedRunningTime="2025-11-23 08:20:12.349849281 +0000 UTC m=+5396.047254060" watchObservedRunningTime="2025-11-23 08:20:12.353145101 +0000 UTC m=+5396.050549880" Nov 23 08:20:12 crc kubenswrapper[5028]: I1123 08:20:12.367774 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lhc8l" podStartSLOduration=2.952733424 podStartE2EDuration="4.367753678s" podCreationTimestamp="2025-11-23 08:20:08 +0000 UTC" 
firstStartedPulling="2025-11-23 08:20:10.300327297 +0000 UTC m=+5393.997732076" lastFinishedPulling="2025-11-23 08:20:11.715347551 +0000 UTC m=+5395.412752330" observedRunningTime="2025-11-23 08:20:12.365561014 +0000 UTC m=+5396.062965793" watchObservedRunningTime="2025-11-23 08:20:12.367753678 +0000 UTC m=+5396.065158447" Nov 23 08:20:18 crc kubenswrapper[5028]: I1123 08:20:18.157000 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2ldhl" Nov 23 08:20:18 crc kubenswrapper[5028]: I1123 08:20:18.157527 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2ldhl" Nov 23 08:20:18 crc kubenswrapper[5028]: I1123 08:20:18.199741 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2ldhl" Nov 23 08:20:18 crc kubenswrapper[5028]: I1123 08:20:18.425962 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2ldhl" Nov 23 08:20:18 crc kubenswrapper[5028]: I1123 08:20:18.488852 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2ldhl"] Nov 23 08:20:19 crc kubenswrapper[5028]: I1123 08:20:19.052595 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:20:19 crc kubenswrapper[5028]: E1123 08:20:19.053010 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:20:19 crc kubenswrapper[5028]: I1123 08:20:19.247710 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lhc8l" Nov 23 08:20:19 crc kubenswrapper[5028]: I1123 08:20:19.247763 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lhc8l" Nov 23 08:20:19 crc kubenswrapper[5028]: I1123 08:20:19.284171 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lhc8l" Nov 23 08:20:19 crc kubenswrapper[5028]: I1123 08:20:19.435353 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lhc8l" Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.392058 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2ldhl" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="registry-server" containerID="cri-o://dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70" gracePeriod=2 Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.763008 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2ldhl" Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.827675 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc8l"] Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.938433 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-utilities\") pod \"b229bd62-02fb-4237-9ec1-85abac9c2a01\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.938497 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-catalog-content\") pod \"b229bd62-02fb-4237-9ec1-85abac9c2a01\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.938562 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6q42\" (UniqueName: \"kubernetes.io/projected/b229bd62-02fb-4237-9ec1-85abac9c2a01-kube-api-access-f6q42\") pod \"b229bd62-02fb-4237-9ec1-85abac9c2a01\" (UID: \"b229bd62-02fb-4237-9ec1-85abac9c2a01\") " Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.939590 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-utilities" (OuterVolumeSpecName: "utilities") pod "b229bd62-02fb-4237-9ec1-85abac9c2a01" (UID: "b229bd62-02fb-4237-9ec1-85abac9c2a01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.944222 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b229bd62-02fb-4237-9ec1-85abac9c2a01-kube-api-access-f6q42" (OuterVolumeSpecName: "kube-api-access-f6q42") pod "b229bd62-02fb-4237-9ec1-85abac9c2a01" (UID: "b229bd62-02fb-4237-9ec1-85abac9c2a01"). InnerVolumeSpecName "kube-api-access-f6q42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:20 crc kubenswrapper[5028]: I1123 08:20:20.997996 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b229bd62-02fb-4237-9ec1-85abac9c2a01" (UID: "b229bd62-02fb-4237-9ec1-85abac9c2a01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.040060 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6q42\" (UniqueName: \"kubernetes.io/projected/b229bd62-02fb-4237-9ec1-85abac9c2a01-kube-api-access-f6q42\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.040102 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.040116 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b229bd62-02fb-4237-9ec1-85abac9c2a01-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.403207 5028 generic.go:334] "Generic (PLEG): container finished" podID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerID="dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70" exitCode=0 Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.403270 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2ldhl" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.403269 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ldhl" event={"ID":"b229bd62-02fb-4237-9ec1-85abac9c2a01","Type":"ContainerDied","Data":"dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70"} Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.403317 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2ldhl" event={"ID":"b229bd62-02fb-4237-9ec1-85abac9c2a01","Type":"ContainerDied","Data":"01d44e77e4c5c41a3c4c28c7482b5e9f7ff2f0d1c7676dad3bb472d570867c2b"} Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.403336 5028 scope.go:117] "RemoveContainer" containerID="dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.403504 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lhc8l" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="registry-server" containerID="cri-o://90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4" gracePeriod=2 Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.432515 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2ldhl"] Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.434175 5028 scope.go:117] "RemoveContainer" containerID="c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.442757 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2ldhl"] Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.459963 5028 scope.go:117] "RemoveContainer" containerID="289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.552014 5028 scope.go:117] "RemoveContainer" containerID="dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70" Nov 23 08:20:21 crc kubenswrapper[5028]: E1123 08:20:21.552581 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70\": container with ID starting with dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70 not found: ID does not exist" containerID="dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.552642 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70"} err="failed to get container status \"dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70\": rpc error: code = NotFound desc = could not find container \"dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70\": container with ID starting with dd7862b88ae07831de3437ba27da6be5fe21cf2c6f2e1b24db51fb8042f65c70 not found: ID does not exist" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.552673 5028 scope.go:117] "RemoveContainer" containerID="c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f" Nov 23 08:20:21 crc kubenswrapper[5028]: E1123 08:20:21.553162 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f\": container with ID starting with c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f not found: ID does not exist" containerID="c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.553196 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f"} err="failed to get container status \"c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f\": rpc error: code = NotFound desc = could not find container \"c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f\": container with ID starting with c45795e631288b09a8dd0b9c72e4853add029b0e60605ad9f5b0d80dc63f5e4f not found: ID does not exist" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.553220 5028 scope.go:117] "RemoveContainer" containerID="289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e" Nov 23 08:20:21 crc kubenswrapper[5028]: E1123 08:20:21.554247 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e\": container with ID starting with 289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e not found: ID does not exist" containerID="289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.554281 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e"} err="failed to get container status \"289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e\": rpc error: code = NotFound desc = could not find container \"289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e\": container with ID starting with 289d7dcaba14ea546d46bdc80062f901995feb11884c25cbe82959e3ec123d5e not found: ID does not exist" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.805804 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc8l" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.950986 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-catalog-content\") pod \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.951129 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-utilities\") pod \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.951163 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqlvz\" (UniqueName: \"kubernetes.io/projected/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-kube-api-access-tqlvz\") pod \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\" (UID: \"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e\") " Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.952807 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-utilities" (OuterVolumeSpecName: "utilities") pod "f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" (UID: "f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.956399 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-kube-api-access-tqlvz" (OuterVolumeSpecName: "kube-api-access-tqlvz") pod "f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" (UID: "f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e"). InnerVolumeSpecName "kube-api-access-tqlvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:20:21 crc kubenswrapper[5028]: I1123 08:20:21.968077 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" (UID: "f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.052467 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.052504 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqlvz\" (UniqueName: \"kubernetes.io/projected/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-kube-api-access-tqlvz\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.052513 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.416348 5028 generic.go:334] "Generic (PLEG): container finished" podID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerID="90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4" exitCode=0 Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.416420 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lhc8l" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.416416 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc8l" event={"ID":"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e","Type":"ContainerDied","Data":"90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4"} Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.416490 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lhc8l" event={"ID":"f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e","Type":"ContainerDied","Data":"cac3581d43c86c8834498f9f6345c025dcc13e2e46cc90ae271a2447c91b979e"} Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.416520 5028 scope.go:117] "RemoveContainer" containerID="90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.445035 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc8l"] Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.446234 5028 scope.go:117] "RemoveContainer" containerID="63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.456076 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lhc8l"] Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.482580 5028 scope.go:117] "RemoveContainer" containerID="9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.508552 5028 scope.go:117] "RemoveContainer" containerID="90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4" Nov 23 08:20:22 crc kubenswrapper[5028]: E1123 08:20:22.509259 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4\": container with ID starting with 90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4 not found: ID does not exist" containerID="90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.509337 5028 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4"} err="failed to get container status \"90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4\": rpc error: code = NotFound desc = could not find container \"90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4\": container with ID starting with 90b48494b22216d2cd2ab6d3fef6d7f0a66899d4d66eb515aa5504c11de2d7a4 not found: ID does not exist" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.509398 5028 scope.go:117] "RemoveContainer" containerID="63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911" Nov 23 08:20:22 crc kubenswrapper[5028]: E1123 08:20:22.509986 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911\": container with ID starting with 63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911 not found: ID does not exist" containerID="63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.510091 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911"} err="failed to get container status \"63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911\": rpc error: code = NotFound desc = could not find container \"63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911\": container with ID starting with 63014291ddd1f6f68fa0bd0e9533e3fa30d5e44e37302c6ef0ccd5eccaad2911 not found: ID does not exist" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.510171 5028 scope.go:117] "RemoveContainer" containerID="9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e" Nov 23 08:20:22 crc kubenswrapper[5028]: E1123 08:20:22.510619 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e\": container with ID starting with 9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e not found: ID does not exist" containerID="9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e" Nov 23 08:20:22 crc kubenswrapper[5028]: I1123 08:20:22.510658 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e"} err="failed to get container status \"9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e\": rpc error: code = NotFound desc = could not find container \"9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e\": container with ID starting with 9bb3c2b0595d06a478e94b6b2e9d2b22551f9fc2ab476541a0612ebc2da9e12e not found: ID does not exist" Nov 23 08:20:23 crc kubenswrapper[5028]: I1123 08:20:23.066156 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" path="/var/lib/kubelet/pods/b229bd62-02fb-4237-9ec1-85abac9c2a01/volumes" Nov 23 08:20:23 crc kubenswrapper[5028]: I1123 08:20:23.067111 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" path="/var/lib/kubelet/pods/f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e/volumes" Nov 23 08:20:33 crc kubenswrapper[5028]: I1123 
08:20:33.053063 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:20:33 crc kubenswrapper[5028]: E1123 08:20:33.053695 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:20:47 crc kubenswrapper[5028]: I1123 08:20:47.056053 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:20:47 crc kubenswrapper[5028]: E1123 08:20:47.056697 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:20:59 crc kubenswrapper[5028]: I1123 08:20:59.052561 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:20:59 crc kubenswrapper[5028]: E1123 08:20:59.053396 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:21:10 crc kubenswrapper[5028]: I1123 08:21:10.053097 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:21:10 crc kubenswrapper[5028]: E1123 08:21:10.053630 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:21:24 crc kubenswrapper[5028]: I1123 08:21:24.052401 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:21:24 crc kubenswrapper[5028]: E1123 08:21:24.053005 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:21:36 crc kubenswrapper[5028]: I1123 08:21:36.053336 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:21:36 crc kubenswrapper[5028]: E1123 08:21:36.054089 
5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:21:51 crc kubenswrapper[5028]: I1123 08:21:51.053815 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:21:51 crc kubenswrapper[5028]: E1123 08:21:51.055091 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:22:04 crc kubenswrapper[5028]: I1123 08:22:04.052809 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:22:04 crc kubenswrapper[5028]: E1123 08:22:04.053616 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:22:18 crc kubenswrapper[5028]: I1123 08:22:18.052613 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:22:18 crc kubenswrapper[5028]: E1123 08:22:18.053395 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:22:32 crc kubenswrapper[5028]: I1123 08:22:32.052981 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:22:32 crc kubenswrapper[5028]: E1123 08:22:32.053786 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:22:44 crc kubenswrapper[5028]: I1123 08:22:44.052791 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:22:44 crc kubenswrapper[5028]: E1123 08:22:44.053767 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.926793 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ffx8s"] Nov 23 08:22:52 crc kubenswrapper[5028]: E1123 08:22:52.927901 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="registry-server" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.927922 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="registry-server" Nov 23 08:22:52 crc kubenswrapper[5028]: E1123 08:22:52.927938 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="extract-utilities" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.927969 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="extract-utilities" Nov 23 08:22:52 crc kubenswrapper[5028]: E1123 08:22:52.927995 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="extract-content" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.928006 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="extract-content" Nov 23 08:22:52 crc kubenswrapper[5028]: E1123 08:22:52.928021 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="extract-utilities" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.928030 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="extract-utilities" Nov 23 08:22:52 crc kubenswrapper[5028]: E1123 08:22:52.928069 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="registry-server" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.928077 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="registry-server" Nov 23 08:22:52 crc kubenswrapper[5028]: E1123 08:22:52.928097 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="extract-content" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.928105 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="extract-content" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.928291 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b229bd62-02fb-4237-9ec1-85abac9c2a01" containerName="registry-server" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.928311 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5cf6ff4-7723-4c7f-82ea-4e9ba5e83c2e" containerName="registry-server" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.929630 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:52 crc kubenswrapper[5028]: I1123 08:22:52.937312 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffx8s"] Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.112354 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-utilities\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.112676 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-catalog-content\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.112855 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78z2c\" (UniqueName: \"kubernetes.io/projected/352081cc-561d-4ffc-bd83-23df8ac80e6a-kube-api-access-78z2c\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.213997 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78z2c\" (UniqueName: \"kubernetes.io/projected/352081cc-561d-4ffc-bd83-23df8ac80e6a-kube-api-access-78z2c\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.214084 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-utilities\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.214119 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-catalog-content\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.214526 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-catalog-content\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.214684 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-utilities\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.239220 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-78z2c\" (UniqueName: \"kubernetes.io/projected/352081cc-561d-4ffc-bd83-23df8ac80e6a-kube-api-access-78z2c\") pod \"certified-operators-ffx8s\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.255718 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.576389 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffx8s"] Nov 23 08:22:53 crc kubenswrapper[5028]: I1123 08:22:53.595255 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffx8s" event={"ID":"352081cc-561d-4ffc-bd83-23df8ac80e6a","Type":"ContainerStarted","Data":"1689c73e85250dc9a11afb159c320d1e43b279a34864b79d6a25bd8d39f58a58"} Nov 23 08:22:54 crc kubenswrapper[5028]: I1123 08:22:54.606295 5028 generic.go:334] "Generic (PLEG): container finished" podID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerID="6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96" exitCode=0 Nov 23 08:22:54 crc kubenswrapper[5028]: I1123 08:22:54.606406 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffx8s" event={"ID":"352081cc-561d-4ffc-bd83-23df8ac80e6a","Type":"ContainerDied","Data":"6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96"} Nov 23 08:22:55 crc kubenswrapper[5028]: I1123 08:22:55.619178 5028 generic.go:334] "Generic (PLEG): container finished" podID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerID="7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3" exitCode=0 Nov 23 08:22:55 crc kubenswrapper[5028]: I1123 08:22:55.619238 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffx8s" event={"ID":"352081cc-561d-4ffc-bd83-23df8ac80e6a","Type":"ContainerDied","Data":"7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3"} Nov 23 08:22:56 crc kubenswrapper[5028]: I1123 08:22:56.052927 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:22:56 crc kubenswrapper[5028]: E1123 08:22:56.053199 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:22:56 crc kubenswrapper[5028]: I1123 08:22:56.627687 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffx8s" event={"ID":"352081cc-561d-4ffc-bd83-23df8ac80e6a","Type":"ContainerStarted","Data":"16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef"} Nov 23 08:22:56 crc kubenswrapper[5028]: I1123 08:22:56.646132 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ffx8s" podStartSLOduration=3.251232855 podStartE2EDuration="4.646112628s" podCreationTimestamp="2025-11-23 08:22:52 +0000 UTC" firstStartedPulling="2025-11-23 08:22:54.607994053 +0000 UTC m=+5558.305398842" 
lastFinishedPulling="2025-11-23 08:22:56.002873836 +0000 UTC m=+5559.700278615" observedRunningTime="2025-11-23 08:22:56.64248033 +0000 UTC m=+5560.339885149" watchObservedRunningTime="2025-11-23 08:22:56.646112628 +0000 UTC m=+5560.343517417" Nov 23 08:23:03 crc kubenswrapper[5028]: I1123 08:23:03.256873 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:23:03 crc kubenswrapper[5028]: I1123 08:23:03.257378 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:23:03 crc kubenswrapper[5028]: I1123 08:23:03.299539 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:23:03 crc kubenswrapper[5028]: I1123 08:23:03.756322 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:23:03 crc kubenswrapper[5028]: I1123 08:23:03.806719 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffx8s"] Nov 23 08:23:05 crc kubenswrapper[5028]: I1123 08:23:05.701357 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ffx8s" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="registry-server" containerID="cri-o://16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef" gracePeriod=2 Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.639433 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.709978 5028 generic.go:334] "Generic (PLEG): container finished" podID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerID="16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef" exitCode=0 Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.710040 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffx8s" event={"ID":"352081cc-561d-4ffc-bd83-23df8ac80e6a","Type":"ContainerDied","Data":"16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef"} Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.710066 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffx8s" event={"ID":"352081cc-561d-4ffc-bd83-23df8ac80e6a","Type":"ContainerDied","Data":"1689c73e85250dc9a11afb159c320d1e43b279a34864b79d6a25bd8d39f58a58"} Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.710062 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffx8s" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.710102 5028 scope.go:117] "RemoveContainer" containerID="16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.719380 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78z2c\" (UniqueName: \"kubernetes.io/projected/352081cc-561d-4ffc-bd83-23df8ac80e6a-kube-api-access-78z2c\") pod \"352081cc-561d-4ffc-bd83-23df8ac80e6a\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.719797 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-catalog-content\") pod \"352081cc-561d-4ffc-bd83-23df8ac80e6a\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.719867 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-utilities\") pod \"352081cc-561d-4ffc-bd83-23df8ac80e6a\" (UID: \"352081cc-561d-4ffc-bd83-23df8ac80e6a\") " Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.720696 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-utilities" (OuterVolumeSpecName: "utilities") pod "352081cc-561d-4ffc-bd83-23df8ac80e6a" (UID: "352081cc-561d-4ffc-bd83-23df8ac80e6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.727562 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/352081cc-561d-4ffc-bd83-23df8ac80e6a-kube-api-access-78z2c" (OuterVolumeSpecName: "kube-api-access-78z2c") pod "352081cc-561d-4ffc-bd83-23df8ac80e6a" (UID: "352081cc-561d-4ffc-bd83-23df8ac80e6a"). InnerVolumeSpecName "kube-api-access-78z2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.730714 5028 scope.go:117] "RemoveContainer" containerID="7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.757395 5028 scope.go:117] "RemoveContainer" containerID="6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.762849 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "352081cc-561d-4ffc-bd83-23df8ac80e6a" (UID: "352081cc-561d-4ffc-bd83-23df8ac80e6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.777900 5028 scope.go:117] "RemoveContainer" containerID="16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef" Nov 23 08:23:06 crc kubenswrapper[5028]: E1123 08:23:06.778298 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef\": container with ID starting with 16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef not found: ID does not exist" containerID="16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.778328 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef"} err="failed to get container status \"16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef\": rpc error: code = NotFound desc = could not find container \"16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef\": container with ID starting with 16af3b1564092e6298a348bc65bee2140a45a9de48f89e3aa7c14b12eb4ac2ef not found: ID does not exist" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.778348 5028 scope.go:117] "RemoveContainer" containerID="7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3" Nov 23 08:23:06 crc kubenswrapper[5028]: E1123 08:23:06.778789 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3\": container with ID starting with 7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3 not found: ID does not exist" containerID="7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.778808 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3"} err="failed to get container status \"7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3\": rpc error: code = NotFound desc = could not find container \"7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3\": container with ID starting with 7773df3a184d27b30b79c68f70b758b01b3372f61e7ddc5f47b19bd3030a7af3 not found: ID does not exist" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.778821 5028 scope.go:117] "RemoveContainer" containerID="6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96" Nov 23 08:23:06 crc kubenswrapper[5028]: E1123 08:23:06.779285 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96\": container with ID starting with 6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96 not found: ID does not exist" containerID="6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.779322 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96"} err="failed to get container status \"6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96\": rpc error: code = NotFound desc = could not 
find container \"6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96\": container with ID starting with 6d7bde999b52c45fe57a269150cadd91f121539e431bbe0e76c57445d90b6c96 not found: ID does not exist" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.821698 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78z2c\" (UniqueName: \"kubernetes.io/projected/352081cc-561d-4ffc-bd83-23df8ac80e6a-kube-api-access-78z2c\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.821736 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:06 crc kubenswrapper[5028]: I1123 08:23:06.821784 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/352081cc-561d-4ffc-bd83-23df8ac80e6a-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:23:07 crc kubenswrapper[5028]: I1123 08:23:07.062187 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ffx8s"] Nov 23 08:23:07 crc kubenswrapper[5028]: I1123 08:23:07.063367 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ffx8s"] Nov 23 08:23:09 crc kubenswrapper[5028]: I1123 08:23:09.074835 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" path="/var/lib/kubelet/pods/352081cc-561d-4ffc-bd83-23df8ac80e6a/volumes" Nov 23 08:23:10 crc kubenswrapper[5028]: I1123 08:23:10.053640 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:23:10 crc kubenswrapper[5028]: E1123 08:23:10.054077 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:23:22 crc kubenswrapper[5028]: I1123 08:23:22.053386 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:23:22 crc kubenswrapper[5028]: E1123 08:23:22.054262 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:23:34 crc kubenswrapper[5028]: I1123 08:23:34.053518 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:23:34 crc kubenswrapper[5028]: I1123 08:23:34.919473 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"2348dae283de5d6de36acb1e4941d5d2419045a90922533d3a7b3763cf1cbc5e"} Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.731772 5028 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-tnq4l"] Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.736939 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-tnq4l"] Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.899499 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-5g8n5"] Nov 23 08:24:07 crc kubenswrapper[5028]: E1123 08:24:07.899780 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="extract-content" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.899792 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="extract-content" Nov 23 08:24:07 crc kubenswrapper[5028]: E1123 08:24:07.899821 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="extract-utilities" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.899833 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="extract-utilities" Nov 23 08:24:07 crc kubenswrapper[5028]: E1123 08:24:07.899850 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="registry-server" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.899858 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="registry-server" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.900008 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="352081cc-561d-4ffc-bd83-23df8ac80e6a" containerName="registry-server" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.941418 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5g8n5"] Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.941531 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.944047 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.944442 5028 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-t9w2b" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.945644 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 23 08:24:07 crc kubenswrapper[5028]: I1123 08:24:07.946105 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.042788 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd31c495-04c8-4fa5-9f9e-c6898ee76091-node-mnt\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.043090 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqj62\" (UniqueName: \"kubernetes.io/projected/bd31c495-04c8-4fa5-9f9e-c6898ee76091-kube-api-access-pqj62\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.043292 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd31c495-04c8-4fa5-9f9e-c6898ee76091-crc-storage\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.144386 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd31c495-04c8-4fa5-9f9e-c6898ee76091-crc-storage\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.144525 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd31c495-04c8-4fa5-9f9e-c6898ee76091-node-mnt\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.144648 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqj62\" (UniqueName: \"kubernetes.io/projected/bd31c495-04c8-4fa5-9f9e-c6898ee76091-kube-api-access-pqj62\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.145008 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd31c495-04c8-4fa5-9f9e-c6898ee76091-node-mnt\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.145216 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd31c495-04c8-4fa5-9f9e-c6898ee76091-crc-storage\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.166452 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqj62\" (UniqueName: \"kubernetes.io/projected/bd31c495-04c8-4fa5-9f9e-c6898ee76091-kube-api-access-pqj62\") pod \"crc-storage-crc-5g8n5\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.268145 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:08 crc kubenswrapper[5028]: I1123 08:24:08.693866 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5g8n5"] Nov 23 08:24:09 crc kubenswrapper[5028]: I1123 08:24:09.069535 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b93076-449f-40db-897d-e51e37113585" path="/var/lib/kubelet/pods/91b93076-449f-40db-897d-e51e37113585/volumes" Nov 23 08:24:09 crc kubenswrapper[5028]: I1123 08:24:09.198741 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5g8n5" event={"ID":"bd31c495-04c8-4fa5-9f9e-c6898ee76091","Type":"ContainerStarted","Data":"c985e986797af359637d0cfbe2f60b6606f46b98f473c53b658570fe87771b70"} Nov 23 08:24:10 crc kubenswrapper[5028]: I1123 08:24:10.210746 5028 generic.go:334] "Generic (PLEG): container finished" podID="bd31c495-04c8-4fa5-9f9e-c6898ee76091" containerID="a758ea1b07e92616b0fa449ee623a458cde3b234b1f04539092fb68b8718bdda" exitCode=0 Nov 23 08:24:10 crc kubenswrapper[5028]: I1123 08:24:10.210879 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5g8n5" event={"ID":"bd31c495-04c8-4fa5-9f9e-c6898ee76091","Type":"ContainerDied","Data":"a758ea1b07e92616b0fa449ee623a458cde3b234b1f04539092fb68b8718bdda"} Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.579435 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.696330 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqj62\" (UniqueName: \"kubernetes.io/projected/bd31c495-04c8-4fa5-9f9e-c6898ee76091-kube-api-access-pqj62\") pod \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.696429 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd31c495-04c8-4fa5-9f9e-c6898ee76091-node-mnt\") pod \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.696538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd31c495-04c8-4fa5-9f9e-c6898ee76091-crc-storage\") pod \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\" (UID: \"bd31c495-04c8-4fa5-9f9e-c6898ee76091\") " Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.697241 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd31c495-04c8-4fa5-9f9e-c6898ee76091-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "bd31c495-04c8-4fa5-9f9e-c6898ee76091" (UID: "bd31c495-04c8-4fa5-9f9e-c6898ee76091"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.711620 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd31c495-04c8-4fa5-9f9e-c6898ee76091-kube-api-access-pqj62" (OuterVolumeSpecName: "kube-api-access-pqj62") pod "bd31c495-04c8-4fa5-9f9e-c6898ee76091" (UID: "bd31c495-04c8-4fa5-9f9e-c6898ee76091"). InnerVolumeSpecName "kube-api-access-pqj62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.721105 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd31c495-04c8-4fa5-9f9e-c6898ee76091-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "bd31c495-04c8-4fa5-9f9e-c6898ee76091" (UID: "bd31c495-04c8-4fa5-9f9e-c6898ee76091"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.798147 5028 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/bd31c495-04c8-4fa5-9f9e-c6898ee76091-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.798177 5028 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/bd31c495-04c8-4fa5-9f9e-c6898ee76091-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:11 crc kubenswrapper[5028]: I1123 08:24:11.798188 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqj62\" (UniqueName: \"kubernetes.io/projected/bd31c495-04c8-4fa5-9f9e-c6898ee76091-kube-api-access-pqj62\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:12 crc kubenswrapper[5028]: I1123 08:24:12.228581 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5g8n5" event={"ID":"bd31c495-04c8-4fa5-9f9e-c6898ee76091","Type":"ContainerDied","Data":"c985e986797af359637d0cfbe2f60b6606f46b98f473c53b658570fe87771b70"} Nov 23 08:24:12 crc kubenswrapper[5028]: I1123 08:24:12.228863 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c985e986797af359637d0cfbe2f60b6606f46b98f473c53b658570fe87771b70" Nov 23 08:24:12 crc kubenswrapper[5028]: I1123 08:24:12.228646 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5g8n5" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.070577 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-5g8n5"] Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.076460 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-5g8n5"] Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.199170 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-blwm8"] Nov 23 08:24:14 crc kubenswrapper[5028]: E1123 08:24:14.199477 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd31c495-04c8-4fa5-9f9e-c6898ee76091" containerName="storage" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.199488 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd31c495-04c8-4fa5-9f9e-c6898ee76091" containerName="storage" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.199671 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd31c495-04c8-4fa5-9f9e-c6898ee76091" containerName="storage" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.200240 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.204141 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.204370 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.204112 5028 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-t9w2b" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.205181 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.219906 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-blwm8"] Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.340305 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/647e3397-f082-42e6-a6b8-a63f4750b1d1-crc-storage\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.340416 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/647e3397-f082-42e6-a6b8-a63f4750b1d1-node-mnt\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.340642 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czh8d\" (UniqueName: \"kubernetes.io/projected/647e3397-f082-42e6-a6b8-a63f4750b1d1-kube-api-access-czh8d\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.442297 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/647e3397-f082-42e6-a6b8-a63f4750b1d1-crc-storage\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.442804 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/647e3397-f082-42e6-a6b8-a63f4750b1d1-node-mnt\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.442867 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czh8d\" (UniqueName: \"kubernetes.io/projected/647e3397-f082-42e6-a6b8-a63f4750b1d1-kube-api-access-czh8d\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.443392 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/647e3397-f082-42e6-a6b8-a63f4750b1d1-node-mnt\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " 
pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.443991 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/647e3397-f082-42e6-a6b8-a63f4750b1d1-crc-storage\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.464461 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czh8d\" (UniqueName: \"kubernetes.io/projected/647e3397-f082-42e6-a6b8-a63f4750b1d1-kube-api-access-czh8d\") pod \"crc-storage-crc-blwm8\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.526167 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:14 crc kubenswrapper[5028]: I1123 08:24:14.992940 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-blwm8"] Nov 23 08:24:15 crc kubenswrapper[5028]: I1123 08:24:15.061916 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd31c495-04c8-4fa5-9f9e-c6898ee76091" path="/var/lib/kubelet/pods/bd31c495-04c8-4fa5-9f9e-c6898ee76091/volumes" Nov 23 08:24:15 crc kubenswrapper[5028]: I1123 08:24:15.257115 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-blwm8" event={"ID":"647e3397-f082-42e6-a6b8-a63f4750b1d1","Type":"ContainerStarted","Data":"440ea17ad872b816495ce9089d00f449fc64f459ef4cd3e20b976e67a929dafb"} Nov 23 08:24:16 crc kubenswrapper[5028]: I1123 08:24:16.267317 5028 generic.go:334] "Generic (PLEG): container finished" podID="647e3397-f082-42e6-a6b8-a63f4750b1d1" containerID="ecda9c1809cfa56199ee744b19846adc0c663e288919be3541eaa4c92f17437d" exitCode=0 Nov 23 08:24:16 crc kubenswrapper[5028]: I1123 08:24:16.267375 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-blwm8" event={"ID":"647e3397-f082-42e6-a6b8-a63f4750b1d1","Type":"ContainerDied","Data":"ecda9c1809cfa56199ee744b19846adc0c663e288919be3541eaa4c92f17437d"} Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.531165 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.688666 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/647e3397-f082-42e6-a6b8-a63f4750b1d1-node-mnt\") pod \"647e3397-f082-42e6-a6b8-a63f4750b1d1\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.688736 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/647e3397-f082-42e6-a6b8-a63f4750b1d1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "647e3397-f082-42e6-a6b8-a63f4750b1d1" (UID: "647e3397-f082-42e6-a6b8-a63f4750b1d1"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.688806 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czh8d\" (UniqueName: \"kubernetes.io/projected/647e3397-f082-42e6-a6b8-a63f4750b1d1-kube-api-access-czh8d\") pod \"647e3397-f082-42e6-a6b8-a63f4750b1d1\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.688841 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/647e3397-f082-42e6-a6b8-a63f4750b1d1-crc-storage\") pod \"647e3397-f082-42e6-a6b8-a63f4750b1d1\" (UID: \"647e3397-f082-42e6-a6b8-a63f4750b1d1\") " Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.689109 5028 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/647e3397-f082-42e6-a6b8-a63f4750b1d1-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.694186 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647e3397-f082-42e6-a6b8-a63f4750b1d1-kube-api-access-czh8d" (OuterVolumeSpecName: "kube-api-access-czh8d") pod "647e3397-f082-42e6-a6b8-a63f4750b1d1" (UID: "647e3397-f082-42e6-a6b8-a63f4750b1d1"). InnerVolumeSpecName "kube-api-access-czh8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.705627 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647e3397-f082-42e6-a6b8-a63f4750b1d1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "647e3397-f082-42e6-a6b8-a63f4750b1d1" (UID: "647e3397-f082-42e6-a6b8-a63f4750b1d1"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.790727 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czh8d\" (UniqueName: \"kubernetes.io/projected/647e3397-f082-42e6-a6b8-a63f4750b1d1-kube-api-access-czh8d\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:17 crc kubenswrapper[5028]: I1123 08:24:17.790768 5028 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/647e3397-f082-42e6-a6b8-a63f4750b1d1-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 23 08:24:18 crc kubenswrapper[5028]: I1123 08:24:18.282769 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-blwm8" event={"ID":"647e3397-f082-42e6-a6b8-a63f4750b1d1","Type":"ContainerDied","Data":"440ea17ad872b816495ce9089d00f449fc64f459ef4cd3e20b976e67a929dafb"} Nov 23 08:24:18 crc kubenswrapper[5028]: I1123 08:24:18.282808 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440ea17ad872b816495ce9089d00f449fc64f459ef4cd3e20b976e67a929dafb" Nov 23 08:24:18 crc kubenswrapper[5028]: I1123 08:24:18.282872 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-blwm8" Nov 23 08:24:24 crc kubenswrapper[5028]: I1123 08:24:24.524508 5028 scope.go:117] "RemoveContainer" containerID="8357acbacaf85fadfd8a37d6b926c867ccb87c119194a8b6cac49175ac1bb44c" Nov 23 08:26:00 crc kubenswrapper[5028]: I1123 08:26:00.946844 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:26:00 crc kubenswrapper[5028]: I1123 08:26:00.950504 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.778120 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"] Nov 23 08:26:21 crc kubenswrapper[5028]: E1123 08:26:21.779896 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647e3397-f082-42e6-a6b8-a63f4750b1d1" containerName="storage" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.780029 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="647e3397-f082-42e6-a6b8-a63f4750b1d1" containerName="storage" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.780280 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="647e3397-f082-42e6-a6b8-a63f4750b1d1" containerName="storage" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.781061 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.783369 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.783486 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.783835 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jc859" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.784003 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.784051 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.802513 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"] Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.958410 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-dns-svc\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.958495 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-config\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.958539 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zw4\" (UniqueName: \"kubernetes.io/projected/3cf8083c-13af-4696-8f2d-c2c88fba6612-kube-api-access-r2zw4\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.975282 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-j8x7m"] Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.977116 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:21 crc kubenswrapper[5028]: I1123 08:26:21.996034 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-j8x7m"] Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.061986 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-dns-svc\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.062093 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-config\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.062146 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2zw4\" (UniqueName: \"kubernetes.io/projected/3cf8083c-13af-4696-8f2d-c2c88fba6612-kube-api-access-r2zw4\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.063503 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-dns-svc\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.064439 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-config\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.088760 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2zw4\" (UniqueName: \"kubernetes.io/projected/3cf8083c-13af-4696-8f2d-c2c88fba6612-kube-api-access-r2zw4\") pod \"dnsmasq-dns-6d6bd8b8c5-dskf5\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.106515 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.164890 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.164990 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99hk\" (UniqueName: \"kubernetes.io/projected/7c835b4d-ca9b-41ba-ae17-32f31e06e308-kube-api-access-j99hk\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.165046 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-dns-svc\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.265961 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99hk\" (UniqueName: \"kubernetes.io/projected/7c835b4d-ca9b-41ba-ae17-32f31e06e308-kube-api-access-j99hk\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.266447 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-dns-svc\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.266526 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.267590 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.267720 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-dns-svc\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.288162 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99hk\" (UniqueName: \"kubernetes.io/projected/7c835b4d-ca9b-41ba-ae17-32f31e06e308-kube-api-access-j99hk\") pod \"dnsmasq-dns-74bc88c489-j8x7m\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " 
pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.322308 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.679938 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"] Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.681708 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.873132 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.874656 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.876412 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.876751 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.876917 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5bzx6" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.877322 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.878772 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.892065 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.898212 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-j8x7m"] Nov 23 08:26:22 crc kubenswrapper[5028]: W1123 08:26:22.910013 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c835b4d_ca9b_41ba_ae17_32f31e06e308.slice/crio-4a5027798062b0043018b825d8496390bab1ae012830fd1d9b0e391eb90aa007 WatchSource:0}: Error finding container 4a5027798062b0043018b825d8496390bab1ae012830fd1d9b0e391eb90aa007: Status 404 returned error can't find the container with id 4a5027798062b0043018b825d8496390bab1ae012830fd1d9b0e391eb90aa007 Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.975963 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976024 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-server-conf\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976062 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976108 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24f7db37-7c9b-44b1-a4f3-91cc6c062866-pod-info\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976125 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976142 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9swmb\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-kube-api-access-9swmb\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976164 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976185 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24f7db37-7c9b-44b1-a4f3-91cc6c062866-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:22 crc kubenswrapper[5028]: I1123 08:26:22.976201 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077373 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077411 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9swmb\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-kube-api-access-9swmb\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077445 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077487 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24f7db37-7c9b-44b1-a4f3-91cc6c062866-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077512 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077561 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077611 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-server-conf\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.080065 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.078895 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.080173 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24f7db37-7c9b-44b1-a4f3-91cc6c062866-pod-info\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.079448 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.077889 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0" Nov 23 08:26:23 crc 
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.079525 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-server-conf\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.083524 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24f7db37-7c9b-44b1-a4f3-91cc6c062866-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.083874 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24f7db37-7c9b-44b1-a4f3-91cc6c062866-pod-info\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.084076 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.086125 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.086425 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eae9037de8d4b3d34e167668cb0632ceec812926ae87f5644cf3c47db2ce1c9a/globalmount\"" pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.102222 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9swmb\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-kube-api-access-9swmb\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.129410 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.176079 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.177902 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.187090 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-f7kdz"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.187396 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.187479 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.187560 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.187792 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.189496 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.207223 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290249 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290317 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290349 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f31c576-0d52-4b02-a57c-209c966bf098-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290405 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290427 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290478 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0"
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290509 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwj99\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-kube-api-access-gwj99\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290550 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f31c576-0d52-4b02-a57c-209c966bf098-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.290573 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.394539 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.394597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwj99\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-kube-api-access-gwj99\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.394644 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f31c576-0d52-4b02-a57c-209c966bf098-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.394672 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.395284 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.395338 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.396726 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f31c576-0d52-4b02-a57c-209c966bf098-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.396817 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.396844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.399770 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" event={"ID":"3cf8083c-13af-4696-8f2d-c2c88fba6612","Type":"ContainerStarted","Data":"e107e73a837b71665db7016329c9afa7ffc573e6c82a58b6ff9760818213723e"} Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.400270 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.402914 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.405209 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.405492 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.408675 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f31c576-0d52-4b02-a57c-209c966bf098-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 
08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.412279 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.412317 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4537c005bb6617851fe6ce6eb6bb09c5dd1a8188110c50bc4d8bf57e47a4f0eb/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.412668 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.417995 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" event={"ID":"7c835b4d-ca9b-41ba-ae17-32f31e06e308","Type":"ContainerStarted","Data":"4a5027798062b0043018b825d8496390bab1ae012830fd1d9b0e391eb90aa007"} Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.421324 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f31c576-0d52-4b02-a57c-209c966bf098-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.444269 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwj99\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-kube-api-access-gwj99\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.449145 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.523484 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.791347 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:26:23 crc kubenswrapper[5028]: W1123 08:26:23.804331 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f7db37_7c9b_44b1_a4f3_91cc6c062866.slice/crio-15e4afb60a3fc21134043610a8f80564fa2be4b20224f36f5d10a30f52160513 WatchSource:0}: Error finding container 15e4afb60a3fc21134043610a8f80564fa2be4b20224f36f5d10a30f52160513: Status 404 returned error can't find the container with id 15e4afb60a3fc21134043610a8f80564fa2be4b20224f36f5d10a30f52160513 Nov 23 08:26:23 crc kubenswrapper[5028]: I1123 08:26:23.970937 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:26:23 crc kubenswrapper[5028]: W1123 08:26:23.979105 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f31c576_0d52_4b02_a57c_209c966bf098.slice/crio-8b3d95a7df694d0131c9c43e3221fc469517e2ffddde0f926661984092c81949 WatchSource:0}: Error finding container 8b3d95a7df694d0131c9c43e3221fc469517e2ffddde0f926661984092c81949: Status 404 returned error can't find the container with id 8b3d95a7df694d0131c9c43e3221fc469517e2ffddde0f926661984092c81949 Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.415978 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.418431 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.420421 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2rczc" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.422667 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.423026 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.423155 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.427823 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1f31c576-0d52-4b02-a57c-209c966bf098","Type":"ContainerStarted","Data":"8b3d95a7df694d0131c9c43e3221fc469517e2ffddde0f926661984092c81949"} Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.430600 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24f7db37-7c9b-44b1-a4f3-91cc6c062866","Type":"ContainerStarted","Data":"15e4afb60a3fc21134043610a8f80564fa2be4b20224f36f5d10a30f52160513"} Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.433408 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.449888 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520003 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520143 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520172 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520248 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520321 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-config-data-default\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520410 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520476 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpsnf\" (UniqueName: \"kubernetes.io/projected/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-kube-api-access-mpsnf\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.520513 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-kolla-config\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622203 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622381 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622433 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622451 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-config-data-default\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622514 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622538 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpsnf\" (UniqueName: \"kubernetes.io/projected/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-kube-api-access-mpsnf\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622557 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-kolla-config\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.622635 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.623770 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-config-data-default\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.623770 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.624494 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.624664 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.630733 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.642776 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.644901 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.644939 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f7c1047eff86dbd85fc5cdded61c481b2206ad695d78da0018354afe2904a320/globalmount\"" pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.651570 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpsnf\" (UniqueName: \"kubernetes.io/projected/80e72ccf-61ae-48ba-b4e8-4dbeab319ce7-kube-api-access-mpsnf\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.701245 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7bc37bc5-db9f-4cc4-b8a3-094695651158\") pod \"openstack-galera-0\" (UID: \"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7\") " pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.764985 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.823927 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.826350 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.828265 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-52xln" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.829234 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.832912 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.930464 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f454614a-030b-4c07-ac7e-633eb08e37b1-config-data\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.930515 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86p26\" (UniqueName: \"kubernetes.io/projected/f454614a-030b-4c07-ac7e-633eb08e37b1-kube-api-access-86p26\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:24 crc kubenswrapper[5028]: I1123 08:26:24.930698 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f454614a-030b-4c07-ac7e-633eb08e37b1-kolla-config\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.033184 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f454614a-030b-4c07-ac7e-633eb08e37b1-config-data\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.033562 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86p26\" (UniqueName: \"kubernetes.io/projected/f454614a-030b-4c07-ac7e-633eb08e37b1-kube-api-access-86p26\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.033614 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f454614a-030b-4c07-ac7e-633eb08e37b1-kolla-config\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.034094 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f454614a-030b-4c07-ac7e-633eb08e37b1-config-data\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.034337 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f454614a-030b-4c07-ac7e-633eb08e37b1-kolla-config\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.055278 5028 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-86p26\" (UniqueName: \"kubernetes.io/projected/f454614a-030b-4c07-ac7e-633eb08e37b1-kube-api-access-86p26\") pod \"memcached-0\" (UID: \"f454614a-030b-4c07-ac7e-633eb08e37b1\") " pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.165723 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.956993 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.967499 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.968937 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.978932 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.982224 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.982617 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nxd98" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.985726 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 23 08:26:25 crc kubenswrapper[5028]: I1123 08:26:25.990331 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.046717 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052103 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81da84ab-3ca1-4553-887f-8159d930cc0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052166 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-323ff7b6-5100-4768-9aef-364f73bbd374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-323ff7b6-5100-4768-9aef-364f73bbd374\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052204 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w97l\" (UniqueName: \"kubernetes.io/projected/81da84ab-3ca1-4553-887f-8159d930cc0f-kube-api-access-2w97l\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052245 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " 
pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052276 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/81da84ab-3ca1-4553-887f-8159d930cc0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052300 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052445 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/81da84ab-3ca1-4553-887f-8159d930cc0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.052686 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.153975 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.154992 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.155065 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81da84ab-3ca1-4553-887f-8159d930cc0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.155128 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-323ff7b6-5100-4768-9aef-364f73bbd374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-323ff7b6-5100-4768-9aef-364f73bbd374\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.155911 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w97l\" (UniqueName: \"kubernetes.io/projected/81da84ab-3ca1-4553-887f-8159d930cc0f-kube-api-access-2w97l\") pod \"openstack-cell1-galera-0\" (UID: 
\"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.155992 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.156064 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/81da84ab-3ca1-4553-887f-8159d930cc0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.156102 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.156177 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/81da84ab-3ca1-4553-887f-8159d930cc0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.157399 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/81da84ab-3ca1-4553-887f-8159d930cc0f-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.157615 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.159078 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.159124 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-323ff7b6-5100-4768-9aef-364f73bbd374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-323ff7b6-5100-4768-9aef-364f73bbd374\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d00172498c8c2eee93ae0f6db024d41239d6c53ddf649aadf579c8dd8de0dd3/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.159280 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81da84ab-3ca1-4553-887f-8159d930cc0f-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.162564 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81da84ab-3ca1-4553-887f-8159d930cc0f-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.171350 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/81da84ab-3ca1-4553-887f-8159d930cc0f-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.173584 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w97l\" (UniqueName: \"kubernetes.io/projected/81da84ab-3ca1-4553-887f-8159d930cc0f-kube-api-access-2w97l\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.194032 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-323ff7b6-5100-4768-9aef-364f73bbd374\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-323ff7b6-5100-4768-9aef-364f73bbd374\") pod \"openstack-cell1-galera-0\" (UID: \"81da84ab-3ca1-4553-887f-8159d930cc0f\") " pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:26 crc kubenswrapper[5028]: I1123 08:26:26.291789 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:28 crc kubenswrapper[5028]: W1123 08:26:28.792898 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf454614a_030b_4c07_ac7e_633eb08e37b1.slice/crio-958ce07842eb5b36adc28dfc7c426d42aa8c357f642d84e493d211b083cfc52f WatchSource:0}: Error finding container 958ce07842eb5b36adc28dfc7c426d42aa8c357f642d84e493d211b083cfc52f: Status 404 returned error can't find the container with id 958ce07842eb5b36adc28dfc7c426d42aa8c357f642d84e493d211b083cfc52f
Nov 23 08:26:29 crc kubenswrapper[5028]: I1123 08:26:29.468535 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7","Type":"ContainerStarted","Data":"bd3a87e0669a97bbc530c81f4d54d610f3c44a3f5a13248fc678abd07ab60cf3"}
Nov 23 08:26:29 crc kubenswrapper[5028]: I1123 08:26:29.475333 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f454614a-030b-4c07-ac7e-633eb08e37b1","Type":"ContainerStarted","Data":"958ce07842eb5b36adc28dfc7c426d42aa8c357f642d84e493d211b083cfc52f"}
Nov 23 08:26:30 crc kubenswrapper[5028]: I1123 08:26:30.946535 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:26:30 crc kubenswrapper[5028]: I1123 08:26:30.946896 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:26:36 crc kubenswrapper[5028]: I1123 08:26:36.726134 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 23 08:26:38 crc kubenswrapper[5028]: I1123 08:26:38.577093 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81da84ab-3ca1-4553-887f-8159d930cc0f","Type":"ContainerStarted","Data":"64ee0a161dcdbc53132d87d6d402aa0c603463f69c7a84c6819a65ada90d60dc"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.590528 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1f31c576-0d52-4b02-a57c-209c966bf098","Type":"ContainerStarted","Data":"f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.592638 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7","Type":"ContainerStarted","Data":"da5bd52a34060c57d82030d6f9893008be4b15751b06cdee72892b5b2adeb08b"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.594234 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24f7db37-7c9b-44b1-a4f3-91cc6c062866","Type":"ContainerStarted","Data":"44eb44f4d5ba2eb9dd4dcbee5af796ce2e2d813a9b5996e59778c8bdaec45ae2"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.595700 5028 generic.go:334] "Generic (PLEG): container finished" podID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerID="3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7" exitCode=0
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.595764 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" event={"ID":"3cf8083c-13af-4696-8f2d-c2c88fba6612","Type":"ContainerDied","Data":"3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.597217 5028 generic.go:334] "Generic (PLEG): container finished" podID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerID="a752f1950966c72aa799602d29ed8178f4634f3c13326b149fa66c72494a6ee2" exitCode=0
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.597265 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" event={"ID":"7c835b4d-ca9b-41ba-ae17-32f31e06e308","Type":"ContainerDied","Data":"a752f1950966c72aa799602d29ed8178f4634f3c13326b149fa66c72494a6ee2"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.598529 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f454614a-030b-4c07-ac7e-633eb08e37b1","Type":"ContainerStarted","Data":"d764a162c9da979e9830e6d7d71f2c91a0247b86f54de2c480af3465e4976296"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.598662 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.600184 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81da84ab-3ca1-4553-887f-8159d930cc0f","Type":"ContainerStarted","Data":"ebabda20659eba8d9a9ae3f9f997987a303f2064f2314ab36eba9b57344befc5"}
Nov 23 08:26:40 crc kubenswrapper[5028]: I1123 08:26:40.640494 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=6.347621232 podStartE2EDuration="16.640477047s" podCreationTimestamp="2025-11-23 08:26:24 +0000 UTC" firstStartedPulling="2025-11-23 08:26:28.800533381 +0000 UTC m=+5772.497938160" lastFinishedPulling="2025-11-23 08:26:39.093389156 +0000 UTC m=+5782.790793975" observedRunningTime="2025-11-23 08:26:40.633612749 +0000 UTC m=+5784.331017548" watchObservedRunningTime="2025-11-23 08:26:40.640477047 +0000 UTC m=+5784.337881826"
Nov 23 08:26:41 crc kubenswrapper[5028]: I1123 08:26:41.611850 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" event={"ID":"3cf8083c-13af-4696-8f2d-c2c88fba6612","Type":"ContainerStarted","Data":"62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215"}
Nov 23 08:26:41 crc kubenswrapper[5028]: I1123 08:26:41.612600 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"
Nov 23 08:26:41 crc kubenswrapper[5028]: I1123 08:26:41.615642 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" event={"ID":"7c835b4d-ca9b-41ba-ae17-32f31e06e308","Type":"ContainerStarted","Data":"8b04e9efa6a459eda72a77518e5451426073757d030df9355be503e349b70740"}
Nov 23 08:26:41 crc kubenswrapper[5028]: I1123 08:26:41.615686 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m"
Nov 23 08:26:41 crc kubenswrapper[5028]: I1123 08:26:41.634257 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" podStartSLOduration=4.273871274 podStartE2EDuration="20.634236251s" podCreationTimestamp="2025-11-23 08:26:21 +0000 UTC" firstStartedPulling="2025-11-23 08:26:22.68149349 +0000 UTC m=+5766.378898269" lastFinishedPulling="2025-11-23 08:26:39.041858307 +0000 UTC m=+5782.739263246" observedRunningTime="2025-11-23 08:26:41.631694199 +0000 UTC m=+5785.329098998" watchObservedRunningTime="2025-11-23 08:26:41.634236251 +0000 UTC m=+5785.331641040"
Nov 23 08:26:41 crc kubenswrapper[5028]: I1123 08:26:41.653081 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" podStartSLOduration=4.502515999 podStartE2EDuration="20.65303752s" podCreationTimestamp="2025-11-23 08:26:21 +0000 UTC" firstStartedPulling="2025-11-23 08:26:22.912628356 +0000 UTC m=+5766.610033135" lastFinishedPulling="2025-11-23 08:26:39.063149857 +0000 UTC m=+5782.760554656" observedRunningTime="2025-11-23 08:26:41.648024898 +0000 UTC m=+5785.345429697" watchObservedRunningTime="2025-11-23 08:26:41.65303752 +0000 UTC m=+5785.350442329"
Nov 23 08:26:43 crc kubenswrapper[5028]: I1123 08:26:43.632838 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81da84ab-3ca1-4553-887f-8159d930cc0f","Type":"ContainerDied","Data":"ebabda20659eba8d9a9ae3f9f997987a303f2064f2314ab36eba9b57344befc5"}
Nov 23 08:26:43 crc kubenswrapper[5028]: I1123 08:26:43.633032 5028 generic.go:334] "Generic (PLEG): container finished" podID="81da84ab-3ca1-4553-887f-8159d930cc0f" containerID="ebabda20659eba8d9a9ae3f9f997987a303f2064f2314ab36eba9b57344befc5" exitCode=0
Nov 23 08:26:43 crc kubenswrapper[5028]: I1123 08:26:43.637317 5028 generic.go:334] "Generic (PLEG): container finished" podID="80e72ccf-61ae-48ba-b4e8-4dbeab319ce7" containerID="da5bd52a34060c57d82030d6f9893008be4b15751b06cdee72892b5b2adeb08b" exitCode=0
Nov 23 08:26:43 crc kubenswrapper[5028]: I1123 08:26:43.637396 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7","Type":"ContainerDied","Data":"da5bd52a34060c57d82030d6f9893008be4b15751b06cdee72892b5b2adeb08b"}
Nov 23 08:26:44 crc kubenswrapper[5028]: I1123 08:26:44.653913 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"81da84ab-3ca1-4553-887f-8159d930cc0f","Type":"ContainerStarted","Data":"aa79aec1279bbd41ba0c1a664aaa036d2945649ef62f9c43b5deea2f04b5d65b"}
Nov 23 08:26:44 crc kubenswrapper[5028]: I1123 08:26:44.655625 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"80e72ccf-61ae-48ba-b4e8-4dbeab319ce7","Type":"ContainerStarted","Data":"e288905e1e0392e1972eecf0d2b145f51fa7a0e27e231c71171f29b637eb77b7"}
Nov 23 08:26:44 crc kubenswrapper[5028]: I1123 08:26:44.691506 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=19.875345414999998 podStartE2EDuration="20.691476871s" podCreationTimestamp="2025-11-23 08:26:24 +0000 UTC" firstStartedPulling="2025-11-23 08:26:38.278570122 +0000 UTC m=+5781.975974901" lastFinishedPulling="2025-11-23 08:26:39.094701548 +0000 UTC m=+5782.792106357" observedRunningTime="2025-11-23 08:26:44.685470695 +0000 UTC m=+5788.382875534" watchObservedRunningTime="2025-11-23 08:26:44.691476871 +0000 UTC m=+5788.388881680"
Nov 23 08:26:44 crc kubenswrapper[5028]: I1123 08:26:44.710711 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.417232672 podStartE2EDuration="21.710686591s" podCreationTimestamp="2025-11-23 08:26:23 +0000 UTC" firstStartedPulling="2025-11-23 08:26:28.800940701 +0000 UTC m=+5772.498345480" lastFinishedPulling="2025-11-23 08:26:39.09439462 +0000 UTC m=+5782.791799399" observedRunningTime="2025-11-23 08:26:44.706612541 +0000 UTC m=+5788.404017390" watchObservedRunningTime="2025-11-23 08:26:44.710686591 +0000 UTC m=+5788.408091400"
Nov 23 08:26:44 crc kubenswrapper[5028]: I1123 08:26:44.765807 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 23 08:26:44 crc kubenswrapper[5028]: I1123 08:26:44.765892 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 23 08:26:45 crc kubenswrapper[5028]: I1123 08:26:45.168386 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 23 08:26:46 crc kubenswrapper[5028]: I1123 08:26:46.293099 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:46 crc kubenswrapper[5028]: I1123 08:26:46.293809 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 23 08:26:47 crc kubenswrapper[5028]: I1123 08:26:47.108631 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"
Nov 23 08:26:47 crc kubenswrapper[5028]: I1123 08:26:47.323976 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m"
Nov 23 08:26:47 crc kubenswrapper[5028]: I1123 08:26:47.386416 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"]
Nov 23 08:26:47 crc kubenswrapper[5028]: I1123 08:26:47.677621 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" podUID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerName="dnsmasq-dns" containerID="cri-o://62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215" gracePeriod=10
Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.090826 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"
Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.260111 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2zw4\" (UniqueName: \"kubernetes.io/projected/3cf8083c-13af-4696-8f2d-c2c88fba6612-kube-api-access-r2zw4\") pod \"3cf8083c-13af-4696-8f2d-c2c88fba6612\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.260266 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-config\") pod \"3cf8083c-13af-4696-8f2d-c2c88fba6612\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.260306 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-dns-svc\") pod \"3cf8083c-13af-4696-8f2d-c2c88fba6612\" (UID: \"3cf8083c-13af-4696-8f2d-c2c88fba6612\") " Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.266140 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf8083c-13af-4696-8f2d-c2c88fba6612-kube-api-access-r2zw4" (OuterVolumeSpecName: "kube-api-access-r2zw4") pod "3cf8083c-13af-4696-8f2d-c2c88fba6612" (UID: "3cf8083c-13af-4696-8f2d-c2c88fba6612"). InnerVolumeSpecName "kube-api-access-r2zw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.298487 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-config" (OuterVolumeSpecName: "config") pod "3cf8083c-13af-4696-8f2d-c2c88fba6612" (UID: "3cf8083c-13af-4696-8f2d-c2c88fba6612"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.298567 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3cf8083c-13af-4696-8f2d-c2c88fba6612" (UID: "3cf8083c-13af-4696-8f2d-c2c88fba6612"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.362591 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2zw4\" (UniqueName: \"kubernetes.io/projected/3cf8083c-13af-4696-8f2d-c2c88fba6612-kube-api-access-r2zw4\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.362621 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.362631 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cf8083c-13af-4696-8f2d-c2c88fba6612-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.387078 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.457632 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.688183 5028 generic.go:334] "Generic (PLEG): container finished" podID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerID="62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215" exitCode=0 Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.688243 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.688264 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" event={"ID":"3cf8083c-13af-4696-8f2d-c2c88fba6612","Type":"ContainerDied","Data":"62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215"} Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.688573 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d6bd8b8c5-dskf5" event={"ID":"3cf8083c-13af-4696-8f2d-c2c88fba6612","Type":"ContainerDied","Data":"e107e73a837b71665db7016329c9afa7ffc573e6c82a58b6ff9760818213723e"} Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.688610 5028 scope.go:117] "RemoveContainer" containerID="62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.709240 5028 scope.go:117] "RemoveContainer" containerID="3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.731179 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"] Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.754314 5028 scope.go:117] "RemoveContainer" containerID="62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215" Nov 23 08:26:48 crc kubenswrapper[5028]: E1123 08:26:48.754924 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215\": container with ID starting with 62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215 not found: ID does not exist" containerID="62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.755009 5028 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215"} err="failed to get container status \"62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215\": rpc error: code = NotFound desc = could not find container \"62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215\": container with ID starting with 62255964a122a069e402f67ed69d26c22107c0296d306ce1697c47b1da8ed215 not found: ID does not exist" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.755045 5028 scope.go:117] "RemoveContainer" containerID="3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7" Nov 23 08:26:48 crc kubenswrapper[5028]: E1123 08:26:48.755437 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7\": container with ID starting with 3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7 not found: ID does not exist" containerID="3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.755527 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7"} err="failed to get container status \"3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7\": rpc error: code = NotFound desc = could not find container \"3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7\": container with ID starting with 3b6d4b2d2e0346ac983beac71f94dd8efda169bf96c1f20482733f3abc0e93f7 not found: ID does not exist" Nov 23 08:26:48 crc kubenswrapper[5028]: I1123 08:26:48.756539 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d6bd8b8c5-dskf5"] Nov 23 08:26:49 crc kubenswrapper[5028]: I1123 08:26:49.066911 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf8083c-13af-4696-8f2d-c2c88fba6612" path="/var/lib/kubelet/pods/3cf8083c-13af-4696-8f2d-c2c88fba6612/volumes" Nov 23 08:26:50 crc kubenswrapper[5028]: I1123 08:26:50.874846 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 23 08:26:50 crc kubenswrapper[5028]: I1123 08:26:50.948412 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 23 08:27:00 crc kubenswrapper[5028]: I1123 08:27:00.946326 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:27:00 crc kubenswrapper[5028]: I1123 08:27:00.947341 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:27:00 crc kubenswrapper[5028]: I1123 08:27:00.947457 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 08:27:00 crc kubenswrapper[5028]: I1123 08:27:00.948562 5028 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2348dae283de5d6de36acb1e4941d5d2419045a90922533d3a7b3763cf1cbc5e"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:27:00 crc kubenswrapper[5028]: I1123 08:27:00.948689 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://2348dae283de5d6de36acb1e4941d5d2419045a90922533d3a7b3763cf1cbc5e" gracePeriod=600 Nov 23 08:27:01 crc kubenswrapper[5028]: I1123 08:27:01.805533 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="2348dae283de5d6de36acb1e4941d5d2419045a90922533d3a7b3763cf1cbc5e" exitCode=0 Nov 23 08:27:01 crc kubenswrapper[5028]: I1123 08:27:01.805621 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"2348dae283de5d6de36acb1e4941d5d2419045a90922533d3a7b3763cf1cbc5e"} Nov 23 08:27:01 crc kubenswrapper[5028]: I1123 08:27:01.806268 5028 scope.go:117] "RemoveContainer" containerID="9618ba10ebcff555febc5df0841f84e6c160d66316bfaec1acfb0b4f4c14e77a" Nov 23 08:27:03 crc kubenswrapper[5028]: I1123 08:27:03.833783 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423"} Nov 23 08:27:12 crc kubenswrapper[5028]: I1123 08:27:12.939293 5028 generic.go:334] "Generic (PLEG): container finished" podID="1f31c576-0d52-4b02-a57c-209c966bf098" containerID="f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054" exitCode=0 Nov 23 08:27:12 crc kubenswrapper[5028]: I1123 08:27:12.939426 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1f31c576-0d52-4b02-a57c-209c966bf098","Type":"ContainerDied","Data":"f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054"} Nov 23 08:27:12 crc kubenswrapper[5028]: I1123 08:27:12.943209 5028 generic.go:334] "Generic (PLEG): container finished" podID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerID="44eb44f4d5ba2eb9dd4dcbee5af796ce2e2d813a9b5996e59778c8bdaec45ae2" exitCode=0 Nov 23 08:27:12 crc kubenswrapper[5028]: I1123 08:27:12.943277 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24f7db37-7c9b-44b1-a4f3-91cc6c062866","Type":"ContainerDied","Data":"44eb44f4d5ba2eb9dd4dcbee5af796ce2e2d813a9b5996e59778c8bdaec45ae2"} Nov 23 08:27:13 crc kubenswrapper[5028]: I1123 08:27:13.952254 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1f31c576-0d52-4b02-a57c-209c966bf098","Type":"ContainerStarted","Data":"9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53"} Nov 23 08:27:13 crc kubenswrapper[5028]: I1123 08:27:13.953120 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:13 crc kubenswrapper[5028]: I1123 08:27:13.954313 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"24f7db37-7c9b-44b1-a4f3-91cc6c062866","Type":"ContainerStarted","Data":"9952e62c8698fc9f438aceb97c49c85345e06e140ba3ac46ea3cae50d3a818a9"} Nov 23 08:27:13 crc kubenswrapper[5028]: I1123 08:27:13.954464 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 08:27:13 crc kubenswrapper[5028]: I1123 08:27:13.981470 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.66778831 podStartE2EDuration="51.981431817s" podCreationTimestamp="2025-11-23 08:26:22 +0000 UTC" firstStartedPulling="2025-11-23 08:26:23.982464049 +0000 UTC m=+5767.679868818" lastFinishedPulling="2025-11-23 08:26:36.296107546 +0000 UTC m=+5779.993512325" observedRunningTime="2025-11-23 08:27:13.972403576 +0000 UTC m=+5817.669808375" watchObservedRunningTime="2025-11-23 08:27:13.981431817 +0000 UTC m=+5817.678836616" Nov 23 08:27:14 crc kubenswrapper[5028]: I1123 08:27:14.005992 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.508408947 podStartE2EDuration="53.005969007s" podCreationTimestamp="2025-11-23 08:26:21 +0000 UTC" firstStartedPulling="2025-11-23 08:26:23.806031559 +0000 UTC m=+5767.503436338" lastFinishedPulling="2025-11-23 08:26:36.303591619 +0000 UTC m=+5780.000996398" observedRunningTime="2025-11-23 08:27:14.000703638 +0000 UTC m=+5817.698108427" watchObservedRunningTime="2025-11-23 08:27:14.005969007 +0000 UTC m=+5817.703373786" Nov 23 08:27:23 crc kubenswrapper[5028]: I1123 08:27:23.210725 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 08:27:23 crc kubenswrapper[5028]: I1123 08:27:23.527250 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.294067 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-97464f77-bbg8w"] Nov 23 08:27:29 crc kubenswrapper[5028]: E1123 08:27:29.294791 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerName="dnsmasq-dns" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.294803 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerName="dnsmasq-dns" Nov 23 08:27:29 crc kubenswrapper[5028]: E1123 08:27:29.294820 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerName="init" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.294826 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerName="init" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.294982 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf8083c-13af-4696-8f2d-c2c88fba6612" containerName="dnsmasq-dns" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.295797 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.308614 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-97464f77-bbg8w"] Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.313983 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-dns-svc\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.315368 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxdw4\" (UniqueName: \"kubernetes.io/projected/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-kube-api-access-nxdw4\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.315545 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-config\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.417347 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxdw4\" (UniqueName: \"kubernetes.io/projected/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-kube-api-access-nxdw4\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.417753 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-config\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.417926 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-dns-svc\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.418764 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-config\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.419062 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-dns-svc\") pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.447124 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxdw4\" (UniqueName: \"kubernetes.io/projected/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-kube-api-access-nxdw4\") 
pod \"dnsmasq-dns-97464f77-bbg8w\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:29 crc kubenswrapper[5028]: I1123 08:27:29.619808 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:30 crc kubenswrapper[5028]: I1123 08:27:30.012762 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:27:30 crc kubenswrapper[5028]: I1123 08:27:30.034152 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-97464f77-bbg8w"] Nov 23 08:27:30 crc kubenswrapper[5028]: I1123 08:27:30.087925 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-bbg8w" event={"ID":"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1","Type":"ContainerStarted","Data":"7262beffe2fed47919296086399ebbc48ef6a788949a92018b0c85306f74bf28"} Nov 23 08:27:30 crc kubenswrapper[5028]: I1123 08:27:30.662311 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:27:31 crc kubenswrapper[5028]: I1123 08:27:31.096543 5028 generic.go:334] "Generic (PLEG): container finished" podID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerID="bdfb3715d97016fb532bb718b7fc1c32fe4c02ba2a7a86266b6d3a6d9f8bac43" exitCode=0 Nov 23 08:27:31 crc kubenswrapper[5028]: I1123 08:27:31.096621 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-bbg8w" event={"ID":"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1","Type":"ContainerDied","Data":"bdfb3715d97016fb532bb718b7fc1c32fe4c02ba2a7a86266b6d3a6d9f8bac43"} Nov 23 08:27:31 crc kubenswrapper[5028]: I1123 08:27:31.744941 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerName="rabbitmq" containerID="cri-o://9952e62c8698fc9f438aceb97c49c85345e06e140ba3ac46ea3cae50d3a818a9" gracePeriod=604799 Nov 23 08:27:32 crc kubenswrapper[5028]: I1123 08:27:32.104091 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-bbg8w" event={"ID":"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1","Type":"ContainerStarted","Data":"66673be3ae205d3ec01a1b153e3be50a726b37c86bbc27b3f6c57ce8a0a3138d"} Nov 23 08:27:32 crc kubenswrapper[5028]: I1123 08:27:32.104248 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:32 crc kubenswrapper[5028]: I1123 08:27:32.122649 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-97464f77-bbg8w" podStartSLOduration=3.122634518 podStartE2EDuration="3.122634518s" podCreationTimestamp="2025-11-23 08:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:27:32.121041919 +0000 UTC m=+5835.818446698" watchObservedRunningTime="2025-11-23 08:27:32.122634518 +0000 UTC m=+5835.820039297" Nov 23 08:27:32 crc kubenswrapper[5028]: I1123 08:27:32.493534 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" containerName="rabbitmq" containerID="cri-o://9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53" gracePeriod=604799 Nov 23 08:27:33 crc kubenswrapper[5028]: I1123 08:27:33.364862 5028 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-server-0" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.247:5672: connect: connection refused" Nov 23 08:27:33 crc kubenswrapper[5028]: I1123 08:27:33.524899 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.248:5672: connect: connection refused" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.155524 5028 generic.go:334] "Generic (PLEG): container finished" podID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerID="9952e62c8698fc9f438aceb97c49c85345e06e140ba3ac46ea3cae50d3a818a9" exitCode=0 Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.155602 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24f7db37-7c9b-44b1-a4f3-91cc6c062866","Type":"ContainerDied","Data":"9952e62c8698fc9f438aceb97c49c85345e06e140ba3ac46ea3cae50d3a818a9"} Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.299874 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468279 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468390 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-erlang-cookie\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468445 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-plugins-conf\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468475 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9swmb\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-kube-api-access-9swmb\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468495 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24f7db37-7c9b-44b1-a4f3-91cc6c062866-pod-info\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468563 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-confd\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468606 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-plugins\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468642 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-server-conf\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.468663 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24f7db37-7c9b-44b1-a4f3-91cc6c062866-erlang-cookie-secret\") pod \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\" (UID: \"24f7db37-7c9b-44b1-a4f3-91cc6c062866\") " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.469256 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.469305 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.470155 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.475277 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/24f7db37-7c9b-44b1-a4f3-91cc6c062866-pod-info" (OuterVolumeSpecName: "pod-info") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.478479 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-kube-api-access-9swmb" (OuterVolumeSpecName: "kube-api-access-9swmb") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "kube-api-access-9swmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.486821 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228" (OuterVolumeSpecName: "persistence") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). 
InnerVolumeSpecName "pvc-c130abc4-626e-4949-b399-88a290bc5228". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.492748 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24f7db37-7c9b-44b1-a4f3-91cc6c062866-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.499110 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-server-conf" (OuterVolumeSpecName: "server-conf") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.551164 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "24f7db37-7c9b-44b1-a4f3-91cc6c062866" (UID: "24f7db37-7c9b-44b1-a4f3-91cc6c062866"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.570870 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") on node \"crc\" " Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.570943 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.570983 5028 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.571001 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9swmb\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-kube-api-access-9swmb\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.571015 5028 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/24f7db37-7c9b-44b1-a4f3-91cc6c062866-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.571026 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.571040 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/24f7db37-7c9b-44b1-a4f3-91cc6c062866-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.571055 5028 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/24f7db37-7c9b-44b1-a4f3-91cc6c062866-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.571067 5028 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/24f7db37-7c9b-44b1-a4f3-91cc6c062866-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.596303 5028 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.596788 5028 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c130abc4-626e-4949-b399-88a290bc5228" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228") on node "crc" Nov 23 08:27:38 crc kubenswrapper[5028]: I1123 08:27:38.671734 5028 reconciler_common.go:293] "Volume detached for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.029122 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.164415 5028 generic.go:334] "Generic (PLEG): container finished" podID="1f31c576-0d52-4b02-a57c-209c966bf098" containerID="9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53" exitCode=0 Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.164481 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1f31c576-0d52-4b02-a57c-209c966bf098","Type":"ContainerDied","Data":"9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53"} Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.164494 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.164516 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1f31c576-0d52-4b02-a57c-209c966bf098","Type":"ContainerDied","Data":"8b3d95a7df694d0131c9c43e3221fc469517e2ffddde0f926661984092c81949"} Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.164537 5028 scope.go:117] "RemoveContainer" containerID="9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.166747 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"24f7db37-7c9b-44b1-a4f3-91cc6c062866","Type":"ContainerDied","Data":"15e4afb60a3fc21134043610a8f80564fa2be4b20224f36f5d10a30f52160513"} Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.166820 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.179308 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-confd\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.179884 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-plugins\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180051 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180079 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-erlang-cookie\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180159 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwj99\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-kube-api-access-gwj99\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180234 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f31c576-0d52-4b02-a57c-209c966bf098-pod-info\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180300 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-plugins-conf\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180361 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-server-conf\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180388 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f31c576-0d52-4b02-a57c-209c966bf098-erlang-cookie-secret\") pod \"1f31c576-0d52-4b02-a57c-209c966bf098\" (UID: \"1f31c576-0d52-4b02-a57c-209c966bf098\") " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.180875 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod 
"1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.182119 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.190489 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.191746 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-kube-api-access-gwj99" (OuterVolumeSpecName: "kube-api-access-gwj99") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "kube-api-access-gwj99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.194015 5028 scope.go:117] "RemoveContainer" containerID="f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.194179 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f31c576-0d52-4b02-a57c-209c966bf098-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.200461 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1f31c576-0d52-4b02-a57c-209c966bf098-pod-info" (OuterVolumeSpecName: "pod-info") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.212242 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.212502 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.220028 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f" (OuterVolumeSpecName: "persistence") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "pvc-0c437766-8f14-4227-9a82-ce019905792f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.233195 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-server-conf" (OuterVolumeSpecName: "server-conf") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.234655 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: E1123 08:27:39.235043 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" containerName="setup-container" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.235066 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" containerName="setup-container" Nov 23 08:27:39 crc kubenswrapper[5028]: E1123 08:27:39.235084 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" containerName="rabbitmq" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.235093 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" containerName="rabbitmq" Nov 23 08:27:39 crc kubenswrapper[5028]: E1123 08:27:39.235110 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerName="rabbitmq" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.235117 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerName="rabbitmq" Nov 23 08:27:39 crc kubenswrapper[5028]: E1123 08:27:39.235159 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerName="setup-container" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.235170 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerName="setup-container" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.235366 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" containerName="rabbitmq" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.235399 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" containerName="rabbitmq" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.236517 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.242838 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.243105 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.246807 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.247063 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.247214 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5bzx6" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.254898 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283735 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwj99\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-kube-api-access-gwj99\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283778 5028 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1f31c576-0d52-4b02-a57c-209c966bf098-pod-info\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283790 5028 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283815 5028 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1f31c576-0d52-4b02-a57c-209c966bf098-server-conf\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283825 5028 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1f31c576-0d52-4b02-a57c-209c966bf098-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283835 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283875 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") on node \"crc\" " Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.283889 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.303293 5028 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.303613 5028 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0c437766-8f14-4227-9a82-ce019905792f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f") on node "crc" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.324504 5028 scope.go:117] "RemoveContainer" containerID="9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53" Nov 23 08:27:39 crc kubenswrapper[5028]: E1123 08:27:39.329547 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53\": container with ID starting with 9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53 not found: ID does not exist" containerID="9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.329629 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53"} err="failed to get container status \"9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53\": rpc error: code = NotFound desc = could not find container \"9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53\": container with ID starting with 9a011b861a160631572c75cc76bb315b86a7a8dd416497fb14c299ce1275ae53 not found: ID does not exist" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.329683 5028 scope.go:117] "RemoveContainer" containerID="f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054" Nov 23 08:27:39 crc kubenswrapper[5028]: E1123 08:27:39.330668 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054\": container with ID starting with f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054 not found: ID does not exist" containerID="f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.330753 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054"} err="failed to get container status \"f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054\": rpc error: code = NotFound desc = could not find container \"f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054\": container with ID starting with f67d86fe3fca35902eb3a6f13b64904a724d93d4ceb5f856e66c25f0c315b054 not found: ID does not exist" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.330826 5028 scope.go:117] "RemoveContainer" containerID="9952e62c8698fc9f438aceb97c49c85345e06e140ba3ac46ea3cae50d3a818a9" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.343887 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1f31c576-0d52-4b02-a57c-209c966bf098" (UID: "1f31c576-0d52-4b02-a57c-209c966bf098"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.348871 5028 scope.go:117] "RemoveContainer" containerID="44eb44f4d5ba2eb9dd4dcbee5af796ce2e2d813a9b5996e59778c8bdaec45ae2" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.386631 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8f2a752-290c-4eaf-9311-d1f13cf93264-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.386673 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8f2a752-290c-4eaf-9311-d1f13cf93264-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.386694 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8f2a752-290c-4eaf-9311-d1f13cf93264-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.386734 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.386796 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zcxx\" (UniqueName: \"kubernetes.io/projected/e8f2a752-290c-4eaf-9311-d1f13cf93264-kube-api-access-7zcxx\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.386830 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.387379 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.387509 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.387556 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8f2a752-290c-4eaf-9311-d1f13cf93264-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.387635 5028 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1f31c576-0d52-4b02-a57c-209c966bf098-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.387664 5028 reconciler_common.go:293] "Volume detached for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.490856 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8f2a752-290c-4eaf-9311-d1f13cf93264-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.490924 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8f2a752-290c-4eaf-9311-d1f13cf93264-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.490966 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8f2a752-290c-4eaf-9311-d1f13cf93264-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.490981 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8f2a752-290c-4eaf-9311-d1f13cf93264-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.491008 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.491052 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zcxx\" (UniqueName: \"kubernetes.io/projected/e8f2a752-290c-4eaf-9311-d1f13cf93264-kube-api-access-7zcxx\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.491071 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.491097 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.491128 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.491562 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.494301 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e8f2a752-290c-4eaf-9311-d1f13cf93264-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.494563 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e8f2a752-290c-4eaf-9311-d1f13cf93264-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.495021 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.497353 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.497390 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eae9037de8d4b3d34e167668cb0632ceec812926ae87f5644cf3c47db2ce1c9a/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.497932 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e8f2a752-290c-4eaf-9311-d1f13cf93264-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.498414 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e8f2a752-290c-4eaf-9311-d1f13cf93264-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.499145 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.504349 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e8f2a752-290c-4eaf-9311-d1f13cf93264-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.506162 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.510389 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zcxx\" (UniqueName: \"kubernetes.io/projected/e8f2a752-290c-4eaf-9311-d1f13cf93264-kube-api-access-7zcxx\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.517650 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.519795 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.524462 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.524641 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.525704 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.525835 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-f7kdz" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.526357 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.531230 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.546658 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c130abc4-626e-4949-b399-88a290bc5228\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c130abc4-626e-4949-b399-88a290bc5228\") pod \"rabbitmq-server-0\" (UID: \"e8f2a752-290c-4eaf-9311-d1f13cf93264\") " pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.621210 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.637069 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.677384 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-j8x7m"] Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.677650 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" podUID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerName="dnsmasq-dns" containerID="cri-o://8b04e9efa6a459eda72a77518e5451426073757d030df9355be503e349b70740" gracePeriod=10 Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.694907 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d864f35-ce70-4dde-adc8-94ba2a94b937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.694981 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.695007 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.695028 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.695056 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d864f35-ce70-4dde-adc8-94ba2a94b937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.695082 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d864f35-ce70-4dde-adc8-94ba2a94b937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.695105 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d864f35-ce70-4dde-adc8-94ba2a94b937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.695135 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvmg\" (UniqueName: \"kubernetes.io/projected/4d864f35-ce70-4dde-adc8-94ba2a94b937-kube-api-access-dvvmg\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.695155 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.796589 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d864f35-ce70-4dde-adc8-94ba2a94b937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.796890 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.796918 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.796939 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.796978 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d864f35-ce70-4dde-adc8-94ba2a94b937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.797011 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d864f35-ce70-4dde-adc8-94ba2a94b937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.797037 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d864f35-ce70-4dde-adc8-94ba2a94b937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.797077 5028 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-dvvmg\" (UniqueName: \"kubernetes.io/projected/4d864f35-ce70-4dde-adc8-94ba2a94b937-kube-api-access-dvvmg\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.797098 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.797332 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.798001 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.798161 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4d864f35-ce70-4dde-adc8-94ba2a94b937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.798802 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4d864f35-ce70-4dde-adc8-94ba2a94b937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.799492 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.799524 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4537c005bb6617851fe6ce6eb6bb09c5dd1a8188110c50bc4d8bf57e47a4f0eb/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.803762 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4d864f35-ce70-4dde-adc8-94ba2a94b937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.803971 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4d864f35-ce70-4dde-adc8-94ba2a94b937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.804696 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4d864f35-ce70-4dde-adc8-94ba2a94b937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.819269 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvmg\" (UniqueName: \"kubernetes.io/projected/4d864f35-ce70-4dde-adc8-94ba2a94b937-kube-api-access-dvvmg\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:39 crc kubenswrapper[5028]: I1123 08:27:39.853178 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0c437766-8f14-4227-9a82-ce019905792f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0c437766-8f14-4227-9a82-ce019905792f\") pod \"rabbitmq-cell1-server-0\" (UID: \"4d864f35-ce70-4dde-adc8-94ba2a94b937\") " pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.109759 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.138537 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.175983 5028 generic.go:334] "Generic (PLEG): container finished" podID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerID="8b04e9efa6a459eda72a77518e5451426073757d030df9355be503e349b70740" exitCode=0 Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.175981 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" event={"ID":"7c835b4d-ca9b-41ba-ae17-32f31e06e308","Type":"ContainerDied","Data":"8b04e9efa6a459eda72a77518e5451426073757d030df9355be503e349b70740"} Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.176062 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" event={"ID":"7c835b4d-ca9b-41ba-ae17-32f31e06e308","Type":"ContainerDied","Data":"4a5027798062b0043018b825d8496390bab1ae012830fd1d9b0e391eb90aa007"} Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.176080 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a5027798062b0043018b825d8496390bab1ae012830fd1d9b0e391eb90aa007" Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.177112 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e8f2a752-290c-4eaf-9311-d1f13cf93264","Type":"ContainerStarted","Data":"1c725803336856c63de883bae1ff3f105f9154f8cbde0269e6466a9cc4a29e8c"} Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.269605 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.416979 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j99hk\" (UniqueName: \"kubernetes.io/projected/7c835b4d-ca9b-41ba-ae17-32f31e06e308-kube-api-access-j99hk\") pod \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.417166 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-dns-svc\") pod \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.417209 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config\") pod \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.420764 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c835b4d-ca9b-41ba-ae17-32f31e06e308-kube-api-access-j99hk" (OuterVolumeSpecName: "kube-api-access-j99hk") pod "7c835b4d-ca9b-41ba-ae17-32f31e06e308" (UID: "7c835b4d-ca9b-41ba-ae17-32f31e06e308"). InnerVolumeSpecName "kube-api-access-j99hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:27:40 crc kubenswrapper[5028]: E1123 08:27:40.449185 5028 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config podName:7c835b4d-ca9b-41ba-ae17-32f31e06e308 nodeName:}" failed. No retries permitted until 2025-11-23 08:27:40.949153076 +0000 UTC m=+5844.646557855 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config") pod "7c835b4d-ca9b-41ba-ae17-32f31e06e308" (UID: "7c835b4d-ca9b-41ba-ae17-32f31e06e308") : error deleting /var/lib/kubelet/pods/7c835b4d-ca9b-41ba-ae17-32f31e06e308/volume-subpaths: remove /var/lib/kubelet/pods/7c835b4d-ca9b-41ba-ae17-32f31e06e308/volume-subpaths: no such file or directory Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.449635 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c835b4d-ca9b-41ba-ae17-32f31e06e308" (UID: "7c835b4d-ca9b-41ba-ae17-32f31e06e308"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.519004 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.519045 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j99hk\" (UniqueName: \"kubernetes.io/projected/7c835b4d-ca9b-41ba-ae17-32f31e06e308-kube-api-access-j99hk\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:40 crc kubenswrapper[5028]: W1123 08:27:40.537221 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d864f35_ce70_4dde_adc8_94ba2a94b937.slice/crio-d88896522e275799ebe87ca98d7a4b94c9f5517e735cb4add5504f5f359ff096 WatchSource:0}: Error finding container d88896522e275799ebe87ca98d7a4b94c9f5517e735cb4add5504f5f359ff096: Status 404 returned error can't find the container with id d88896522e275799ebe87ca98d7a4b94c9f5517e735cb4add5504f5f359ff096 Nov 23 08:27:40 crc kubenswrapper[5028]: I1123 08:27:40.540484 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.026203 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config\") pod \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\" (UID: \"7c835b4d-ca9b-41ba-ae17-32f31e06e308\") " Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.028299 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config" (OuterVolumeSpecName: "config") pod "7c835b4d-ca9b-41ba-ae17-32f31e06e308" (UID: "7c835b4d-ca9b-41ba-ae17-32f31e06e308"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.073221 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f31c576-0d52-4b02-a57c-209c966bf098" path="/var/lib/kubelet/pods/1f31c576-0d52-4b02-a57c-209c966bf098/volumes" Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.073995 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24f7db37-7c9b-44b1-a4f3-91cc6c062866" path="/var/lib/kubelet/pods/24f7db37-7c9b-44b1-a4f3-91cc6c062866/volumes" Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.128312 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c835b4d-ca9b-41ba-ae17-32f31e06e308-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.189019 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d864f35-ce70-4dde-adc8-94ba2a94b937","Type":"ContainerStarted","Data":"d88896522e275799ebe87ca98d7a4b94c9f5517e735cb4add5504f5f359ff096"} Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.189062 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74bc88c489-j8x7m" Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.209732 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-j8x7m"] Nov 23 08:27:41 crc kubenswrapper[5028]: I1123 08:27:41.214706 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74bc88c489-j8x7m"] Nov 23 08:27:42 crc kubenswrapper[5028]: I1123 08:27:42.201622 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e8f2a752-290c-4eaf-9311-d1f13cf93264","Type":"ContainerStarted","Data":"ceeee437739e914c0524816a50f1def17c6be618a2ffe1a6ca804a23515c013d"} Nov 23 08:27:42 crc kubenswrapper[5028]: I1123 08:27:42.202984 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d864f35-ce70-4dde-adc8-94ba2a94b937","Type":"ContainerStarted","Data":"aa6b8e4451dd887d0de0fc7c23635101f8284d272cb90d33151f674123e47135"} Nov 23 08:27:43 crc kubenswrapper[5028]: I1123 08:27:43.151090 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" path="/var/lib/kubelet/pods/7c835b4d-ca9b-41ba-ae17-32f31e06e308/volumes" Nov 23 08:28:14 crc kubenswrapper[5028]: I1123 08:28:14.533635 5028 generic.go:334] "Generic (PLEG): container finished" podID="e8f2a752-290c-4eaf-9311-d1f13cf93264" containerID="ceeee437739e914c0524816a50f1def17c6be618a2ffe1a6ca804a23515c013d" exitCode=0 Nov 23 08:28:14 crc kubenswrapper[5028]: I1123 08:28:14.533721 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e8f2a752-290c-4eaf-9311-d1f13cf93264","Type":"ContainerDied","Data":"ceeee437739e914c0524816a50f1def17c6be618a2ffe1a6ca804a23515c013d"} Nov 23 08:28:14 crc kubenswrapper[5028]: I1123 08:28:14.536514 5028 generic.go:334] "Generic (PLEG): container finished" podID="4d864f35-ce70-4dde-adc8-94ba2a94b937" containerID="aa6b8e4451dd887d0de0fc7c23635101f8284d272cb90d33151f674123e47135" exitCode=0 Nov 23 08:28:14 crc kubenswrapper[5028]: I1123 08:28:14.536542 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"4d864f35-ce70-4dde-adc8-94ba2a94b937","Type":"ContainerDied","Data":"aa6b8e4451dd887d0de0fc7c23635101f8284d272cb90d33151f674123e47135"} Nov 23 08:28:15 crc kubenswrapper[5028]: I1123 08:28:15.550965 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e8f2a752-290c-4eaf-9311-d1f13cf93264","Type":"ContainerStarted","Data":"72da2fb729a18ebee57800813798a0cd8b19adc636e4599138353007e3f2889e"} Nov 23 08:28:15 crc kubenswrapper[5028]: I1123 08:28:15.552966 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 23 08:28:15 crc kubenswrapper[5028]: I1123 08:28:15.553159 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4d864f35-ce70-4dde-adc8-94ba2a94b937","Type":"ContainerStarted","Data":"5f4ee1f9887b24d78282cf75a6ce08f13b002b79ed574712714e505545a7ef64"} Nov 23 08:28:15 crc kubenswrapper[5028]: I1123 08:28:15.553509 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:28:15 crc kubenswrapper[5028]: I1123 08:28:15.599185 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.59653753 podStartE2EDuration="36.59653753s" podCreationTimestamp="2025-11-23 08:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:28:15.589027226 +0000 UTC m=+5879.286432005" watchObservedRunningTime="2025-11-23 08:28:15.59653753 +0000 UTC m=+5879.293942309" Nov 23 08:28:15 crc kubenswrapper[5028]: I1123 08:28:15.639066 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.639025079 podStartE2EDuration="36.639025079s" podCreationTimestamp="2025-11-23 08:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:28:15.633157196 +0000 UTC m=+5879.330561995" watchObservedRunningTime="2025-11-23 08:28:15.639025079 +0000 UTC m=+5879.336429878" Nov 23 08:28:29 crc kubenswrapper[5028]: I1123 08:28:29.641863 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 23 08:28:30 crc kubenswrapper[5028]: I1123 08:28:30.142191 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 08:28:41.864040 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:28:41 crc kubenswrapper[5028]: E1123 08:28:41.865853 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerName="init" Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 08:28:41.865880 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerName="init" Nov 23 08:28:41 crc kubenswrapper[5028]: E1123 08:28:41.865941 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerName="dnsmasq-dns" Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 08:28:41.865973 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerName="dnsmasq-dns" Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 
08:28:41.866656 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c835b4d-ca9b-41ba-ae17-32f31e06e308" containerName="dnsmasq-dns" Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 08:28:41.867888 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 08:28:41.870779 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jz44m" Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 08:28:41.882346 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:28:41 crc kubenswrapper[5028]: I1123 08:28:41.974592 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72hgq\" (UniqueName: \"kubernetes.io/projected/82ecb7c1-2456-4f14-bf2e-b17860ed6f98-kube-api-access-72hgq\") pod \"mariadb-client-1-default\" (UID: \"82ecb7c1-2456-4f14-bf2e-b17860ed6f98\") " pod="openstack/mariadb-client-1-default" Nov 23 08:28:42 crc kubenswrapper[5028]: I1123 08:28:42.075863 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72hgq\" (UniqueName: \"kubernetes.io/projected/82ecb7c1-2456-4f14-bf2e-b17860ed6f98-kube-api-access-72hgq\") pod \"mariadb-client-1-default\" (UID: \"82ecb7c1-2456-4f14-bf2e-b17860ed6f98\") " pod="openstack/mariadb-client-1-default" Nov 23 08:28:42 crc kubenswrapper[5028]: I1123 08:28:42.104537 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72hgq\" (UniqueName: \"kubernetes.io/projected/82ecb7c1-2456-4f14-bf2e-b17860ed6f98-kube-api-access-72hgq\") pod \"mariadb-client-1-default\" (UID: \"82ecb7c1-2456-4f14-bf2e-b17860ed6f98\") " pod="openstack/mariadb-client-1-default" Nov 23 08:28:42 crc kubenswrapper[5028]: I1123 08:28:42.194807 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:28:42 crc kubenswrapper[5028]: I1123 08:28:42.518588 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:28:42 crc kubenswrapper[5028]: I1123 08:28:42.848567 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"82ecb7c1-2456-4f14-bf2e-b17860ed6f98","Type":"ContainerStarted","Data":"367b9f71df040f6af6b10d4b61a0b0f1f7b6a59fc4c0d71fcb0c0ac366749f81"} Nov 23 08:28:43 crc kubenswrapper[5028]: I1123 08:28:43.862789 5028 generic.go:334] "Generic (PLEG): container finished" podID="82ecb7c1-2456-4f14-bf2e-b17860ed6f98" containerID="464f20857948895ec170f6a8b23b48f30897665ac8a2d8434f02e45f585409f1" exitCode=0 Nov 23 08:28:43 crc kubenswrapper[5028]: I1123 08:28:43.862898 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"82ecb7c1-2456-4f14-bf2e-b17860ed6f98","Type":"ContainerDied","Data":"464f20857948895ec170f6a8b23b48f30897665ac8a2d8434f02e45f585409f1"} Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.306209 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.336431 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_82ecb7c1-2456-4f14-bf2e-b17860ed6f98/mariadb-client-1-default/0.log" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.363992 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.369078 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.431784 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72hgq\" (UniqueName: \"kubernetes.io/projected/82ecb7c1-2456-4f14-bf2e-b17860ed6f98-kube-api-access-72hgq\") pod \"82ecb7c1-2456-4f14-bf2e-b17860ed6f98\" (UID: \"82ecb7c1-2456-4f14-bf2e-b17860ed6f98\") " Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.446192 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ecb7c1-2456-4f14-bf2e-b17860ed6f98-kube-api-access-72hgq" (OuterVolumeSpecName: "kube-api-access-72hgq") pod "82ecb7c1-2456-4f14-bf2e-b17860ed6f98" (UID: "82ecb7c1-2456-4f14-bf2e-b17860ed6f98"). InnerVolumeSpecName "kube-api-access-72hgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.532706 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72hgq\" (UniqueName: \"kubernetes.io/projected/82ecb7c1-2456-4f14-bf2e-b17860ed6f98-kube-api-access-72hgq\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.865239 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:28:45 crc kubenswrapper[5028]: E1123 08:28:45.865804 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ecb7c1-2456-4f14-bf2e-b17860ed6f98" containerName="mariadb-client-1-default" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.865833 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ecb7c1-2456-4f14-bf2e-b17860ed6f98" containerName="mariadb-client-1-default" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.866150 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ecb7c1-2456-4f14-bf2e-b17860ed6f98" containerName="mariadb-client-1-default" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.867323 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.885872 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="367b9f71df040f6af6b10d4b61a0b0f1f7b6a59fc4c0d71fcb0c0ac366749f81" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.886034 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 23 08:28:45 crc kubenswrapper[5028]: I1123 08:28:45.888097 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:28:46 crc kubenswrapper[5028]: I1123 08:28:46.044996 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5wt\" (UniqueName: \"kubernetes.io/projected/eb5dabd9-cf4e-43d2-afed-e359e9c72ff3-kube-api-access-df5wt\") pod \"mariadb-client-2-default\" (UID: \"eb5dabd9-cf4e-43d2-afed-e359e9c72ff3\") " pod="openstack/mariadb-client-2-default" Nov 23 08:28:46 crc kubenswrapper[5028]: I1123 08:28:46.146569 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df5wt\" (UniqueName: \"kubernetes.io/projected/eb5dabd9-cf4e-43d2-afed-e359e9c72ff3-kube-api-access-df5wt\") pod \"mariadb-client-2-default\" (UID: \"eb5dabd9-cf4e-43d2-afed-e359e9c72ff3\") " pod="openstack/mariadb-client-2-default" Nov 23 08:28:46 crc kubenswrapper[5028]: I1123 08:28:46.169336 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df5wt\" (UniqueName: \"kubernetes.io/projected/eb5dabd9-cf4e-43d2-afed-e359e9c72ff3-kube-api-access-df5wt\") pod \"mariadb-client-2-default\" (UID: \"eb5dabd9-cf4e-43d2-afed-e359e9c72ff3\") " pod="openstack/mariadb-client-2-default" Nov 23 08:28:46 crc kubenswrapper[5028]: I1123 08:28:46.203989 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:28:47 crc kubenswrapper[5028]: I1123 08:28:46.731638 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:28:47 crc kubenswrapper[5028]: I1123 08:28:46.894406 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"eb5dabd9-cf4e-43d2-afed-e359e9c72ff3","Type":"ContainerStarted","Data":"9038af9884ceb9f8765b958946c69134cd8868b484ea469a37dcd6deb633f119"} Nov 23 08:28:47 crc kubenswrapper[5028]: I1123 08:28:47.064256 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ecb7c1-2456-4f14-bf2e-b17860ed6f98" path="/var/lib/kubelet/pods/82ecb7c1-2456-4f14-bf2e-b17860ed6f98/volumes" Nov 23 08:28:47 crc kubenswrapper[5028]: I1123 08:28:47.905979 5028 generic.go:334] "Generic (PLEG): container finished" podID="eb5dabd9-cf4e-43d2-afed-e359e9c72ff3" containerID="7389fffdcafc993c3bae3d7cb88ddd29313470efaccfe8ae8d702219df14d608" exitCode=1 Nov 23 08:28:47 crc kubenswrapper[5028]: I1123 08:28:47.906050 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"eb5dabd9-cf4e-43d2-afed-e359e9c72ff3","Type":"ContainerDied","Data":"7389fffdcafc993c3bae3d7cb88ddd29313470efaccfe8ae8d702219df14d608"} Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.353510 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.375208 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2-default_eb5dabd9-cf4e-43d2-afed-e359e9c72ff3/mariadb-client-2-default/0.log" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.407996 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.415864 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.496470 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df5wt\" (UniqueName: \"kubernetes.io/projected/eb5dabd9-cf4e-43d2-afed-e359e9c72ff3-kube-api-access-df5wt\") pod \"eb5dabd9-cf4e-43d2-afed-e359e9c72ff3\" (UID: \"eb5dabd9-cf4e-43d2-afed-e359e9c72ff3\") " Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.504094 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5dabd9-cf4e-43d2-afed-e359e9c72ff3-kube-api-access-df5wt" (OuterVolumeSpecName: "kube-api-access-df5wt") pod "eb5dabd9-cf4e-43d2-afed-e359e9c72ff3" (UID: "eb5dabd9-cf4e-43d2-afed-e359e9c72ff3"). InnerVolumeSpecName "kube-api-access-df5wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.598569 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df5wt\" (UniqueName: \"kubernetes.io/projected/eb5dabd9-cf4e-43d2-afed-e359e9c72ff3-kube-api-access-df5wt\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.802241 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:28:49 crc kubenswrapper[5028]: E1123 08:28:49.802753 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5dabd9-cf4e-43d2-afed-e359e9c72ff3" containerName="mariadb-client-2-default" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.802776 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5dabd9-cf4e-43d2-afed-e359e9c72ff3" containerName="mariadb-client-2-default" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.803006 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5dabd9-cf4e-43d2-afed-e359e9c72ff3" containerName="mariadb-client-2-default" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.803727 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.814135 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.903244 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dztld\" (UniqueName: \"kubernetes.io/projected/b0932680-809a-4db6-94c9-183b623b09a0-kube-api-access-dztld\") pod \"mariadb-client-1\" (UID: \"b0932680-809a-4db6-94c9-183b623b09a0\") " pod="openstack/mariadb-client-1" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.925500 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9038af9884ceb9f8765b958946c69134cd8868b484ea469a37dcd6deb633f119" Nov 23 08:28:49 crc kubenswrapper[5028]: I1123 08:28:49.925622 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 23 08:28:50 crc kubenswrapper[5028]: I1123 08:28:50.005986 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dztld\" (UniqueName: \"kubernetes.io/projected/b0932680-809a-4db6-94c9-183b623b09a0-kube-api-access-dztld\") pod \"mariadb-client-1\" (UID: \"b0932680-809a-4db6-94c9-183b623b09a0\") " pod="openstack/mariadb-client-1" Nov 23 08:28:50 crc kubenswrapper[5028]: I1123 08:28:50.024547 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dztld\" (UniqueName: \"kubernetes.io/projected/b0932680-809a-4db6-94c9-183b623b09a0-kube-api-access-dztld\") pod \"mariadb-client-1\" (UID: \"b0932680-809a-4db6-94c9-183b623b09a0\") " pod="openstack/mariadb-client-1" Nov 23 08:28:50 crc kubenswrapper[5028]: I1123 08:28:50.136423 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:28:50 crc kubenswrapper[5028]: I1123 08:28:50.454365 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:28:50 crc kubenswrapper[5028]: W1123 08:28:50.463151 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0932680_809a_4db6_94c9_183b623b09a0.slice/crio-921d17f74a3284559828348500bf6661ff86d00ebb7a87218d7801e20bf8e008 WatchSource:0}: Error finding container 921d17f74a3284559828348500bf6661ff86d00ebb7a87218d7801e20bf8e008: Status 404 returned error can't find the container with id 921d17f74a3284559828348500bf6661ff86d00ebb7a87218d7801e20bf8e008 Nov 23 08:28:50 crc kubenswrapper[5028]: I1123 08:28:50.943971 5028 generic.go:334] "Generic (PLEG): container finished" podID="b0932680-809a-4db6-94c9-183b623b09a0" containerID="9b00e33f984eb578e31313c9942543e2f3062b5045f500b7d5387684f613a47d" exitCode=0 Nov 23 08:28:50 crc kubenswrapper[5028]: I1123 08:28:50.944027 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"b0932680-809a-4db6-94c9-183b623b09a0","Type":"ContainerDied","Data":"9b00e33f984eb578e31313c9942543e2f3062b5045f500b7d5387684f613a47d"} Nov 23 08:28:50 crc kubenswrapper[5028]: I1123 08:28:50.944058 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"b0932680-809a-4db6-94c9-183b623b09a0","Type":"ContainerStarted","Data":"921d17f74a3284559828348500bf6661ff86d00ebb7a87218d7801e20bf8e008"} Nov 23 08:28:51 crc kubenswrapper[5028]: I1123 08:28:51.074252 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5dabd9-cf4e-43d2-afed-e359e9c72ff3" path="/var/lib/kubelet/pods/eb5dabd9-cf4e-43d2-afed-e359e9c72ff3/volumes" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.326812 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.351152 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztld\" (UniqueName: \"kubernetes.io/projected/b0932680-809a-4db6-94c9-183b623b09a0-kube-api-access-dztld\") pod \"b0932680-809a-4db6-94c9-183b623b09a0\" (UID: \"b0932680-809a-4db6-94c9-183b623b09a0\") " Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.351527 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_b0932680-809a-4db6-94c9-183b623b09a0/mariadb-client-1/0.log" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.361582 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0932680-809a-4db6-94c9-183b623b09a0-kube-api-access-dztld" (OuterVolumeSpecName: "kube-api-access-dztld") pod "b0932680-809a-4db6-94c9-183b623b09a0" (UID: "b0932680-809a-4db6-94c9-183b623b09a0"). InnerVolumeSpecName "kube-api-access-dztld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.392284 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.399911 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.452823 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dztld\" (UniqueName: \"kubernetes.io/projected/b0932680-809a-4db6-94c9-183b623b09a0-kube-api-access-dztld\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.808324 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:28:52 crc kubenswrapper[5028]: E1123 08:28:52.808741 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0932680-809a-4db6-94c9-183b623b09a0" containerName="mariadb-client-1" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.808763 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0932680-809a-4db6-94c9-183b623b09a0" containerName="mariadb-client-1" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.808991 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0932680-809a-4db6-94c9-183b623b09a0" containerName="mariadb-client-1" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.809622 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.838009 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.857853 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhtl4\" (UniqueName: \"kubernetes.io/projected/226de197-7f04-46cb-85c1-2947a7209e9e-kube-api-access-hhtl4\") pod \"mariadb-client-4-default\" (UID: \"226de197-7f04-46cb-85c1-2947a7209e9e\") " pod="openstack/mariadb-client-4-default" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.958968 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhtl4\" (UniqueName: \"kubernetes.io/projected/226de197-7f04-46cb-85c1-2947a7209e9e-kube-api-access-hhtl4\") pod \"mariadb-client-4-default\" (UID: \"226de197-7f04-46cb-85c1-2947a7209e9e\") " pod="openstack/mariadb-client-4-default" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.964103 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921d17f74a3284559828348500bf6661ff86d00ebb7a87218d7801e20bf8e008" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.964167 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 23 08:28:52 crc kubenswrapper[5028]: I1123 08:28:52.975244 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhtl4\" (UniqueName: \"kubernetes.io/projected/226de197-7f04-46cb-85c1-2947a7209e9e-kube-api-access-hhtl4\") pod \"mariadb-client-4-default\" (UID: \"226de197-7f04-46cb-85c1-2947a7209e9e\") " pod="openstack/mariadb-client-4-default" Nov 23 08:28:53 crc kubenswrapper[5028]: I1123 08:28:53.063832 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0932680-809a-4db6-94c9-183b623b09a0" path="/var/lib/kubelet/pods/b0932680-809a-4db6-94c9-183b623b09a0/volumes" Nov 23 08:28:53 crc kubenswrapper[5028]: I1123 08:28:53.149164 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:28:53 crc kubenswrapper[5028]: I1123 08:28:53.621873 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:28:53 crc kubenswrapper[5028]: I1123 08:28:53.974802 5028 generic.go:334] "Generic (PLEG): container finished" podID="226de197-7f04-46cb-85c1-2947a7209e9e" containerID="aa33e6e08c382afbb09686aaa0e28ff4b26fd22cb55c645ec42391f06900155d" exitCode=0 Nov 23 08:28:53 crc kubenswrapper[5028]: I1123 08:28:53.974852 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"226de197-7f04-46cb-85c1-2947a7209e9e","Type":"ContainerDied","Data":"aa33e6e08c382afbb09686aaa0e28ff4b26fd22cb55c645ec42391f06900155d"} Nov 23 08:28:53 crc kubenswrapper[5028]: I1123 08:28:53.975099 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"226de197-7f04-46cb-85c1-2947a7209e9e","Type":"ContainerStarted","Data":"037326281c329dbe85377d913f7189cdb740300c3864d8c90dc224d649f310b1"} Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.344744 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.363586 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_226de197-7f04-46cb-85c1-2947a7209e9e/mariadb-client-4-default/0.log" Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.387064 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.393428 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.411550 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhtl4\" (UniqueName: \"kubernetes.io/projected/226de197-7f04-46cb-85c1-2947a7209e9e-kube-api-access-hhtl4\") pod \"226de197-7f04-46cb-85c1-2947a7209e9e\" (UID: \"226de197-7f04-46cb-85c1-2947a7209e9e\") " Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.416541 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226de197-7f04-46cb-85c1-2947a7209e9e-kube-api-access-hhtl4" (OuterVolumeSpecName: "kube-api-access-hhtl4") pod "226de197-7f04-46cb-85c1-2947a7209e9e" (UID: "226de197-7f04-46cb-85c1-2947a7209e9e"). InnerVolumeSpecName "kube-api-access-hhtl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.513786 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhtl4\" (UniqueName: \"kubernetes.io/projected/226de197-7f04-46cb-85c1-2947a7209e9e-kube-api-access-hhtl4\") on node \"crc\" DevicePath \"\"" Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.993314 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="037326281c329dbe85377d913f7189cdb740300c3864d8c90dc224d649f310b1" Nov 23 08:28:55 crc kubenswrapper[5028]: I1123 08:28:55.993348 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 23 08:28:57 crc kubenswrapper[5028]: I1123 08:28:57.084600 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226de197-7f04-46cb-85c1-2947a7209e9e" path="/var/lib/kubelet/pods/226de197-7f04-46cb-85c1-2947a7209e9e/volumes" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.165080 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:28:59 crc kubenswrapper[5028]: E1123 08:28:59.165994 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226de197-7f04-46cb-85c1-2947a7209e9e" containerName="mariadb-client-4-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.166015 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="226de197-7f04-46cb-85c1-2947a7209e9e" containerName="mariadb-client-4-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.166371 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="226de197-7f04-46cb-85c1-2947a7209e9e" containerName="mariadb-client-4-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.167487 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.171197 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jz44m" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.174764 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.272024 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjkf\" (UniqueName: \"kubernetes.io/projected/ca00567f-b1a4-4846-b2dd-581f7c3f726b-kube-api-access-vzjkf\") pod \"mariadb-client-5-default\" (UID: \"ca00567f-b1a4-4846-b2dd-581f7c3f726b\") " pod="openstack/mariadb-client-5-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.372713 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjkf\" (UniqueName: \"kubernetes.io/projected/ca00567f-b1a4-4846-b2dd-581f7c3f726b-kube-api-access-vzjkf\") pod \"mariadb-client-5-default\" (UID: \"ca00567f-b1a4-4846-b2dd-581f7c3f726b\") " pod="openstack/mariadb-client-5-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.391224 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjkf\" (UniqueName: \"kubernetes.io/projected/ca00567f-b1a4-4846-b2dd-581f7c3f726b-kube-api-access-vzjkf\") pod \"mariadb-client-5-default\" (UID: \"ca00567f-b1a4-4846-b2dd-581f7c3f726b\") " pod="openstack/mariadb-client-5-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.491356 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:28:59 crc kubenswrapper[5028]: I1123 08:28:59.989918 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:29:00 crc kubenswrapper[5028]: I1123 08:29:00.028865 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"ca00567f-b1a4-4846-b2dd-581f7c3f726b","Type":"ContainerStarted","Data":"bd582ab9bde3fda25cd0450ee9cbb9227f5449208448ba54da8b580e8ab4ae1c"} Nov 23 08:29:01 crc kubenswrapper[5028]: I1123 08:29:01.042598 5028 generic.go:334] "Generic (PLEG): container finished" podID="ca00567f-b1a4-4846-b2dd-581f7c3f726b" containerID="363f53e0850feedb3abca1260c9d95506ec15b783eff27a760bd42736a9ed598" exitCode=0 Nov 23 08:29:01 crc kubenswrapper[5028]: I1123 08:29:01.042691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"ca00567f-b1a4-4846-b2dd-581f7c3f726b","Type":"ContainerDied","Data":"363f53e0850feedb3abca1260c9d95506ec15b783eff27a760bd42736a9ed598"} Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.401405 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.422509 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_ca00567f-b1a4-4846-b2dd-581f7c3f726b/mariadb-client-5-default/0.log" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.424005 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzjkf\" (UniqueName: \"kubernetes.io/projected/ca00567f-b1a4-4846-b2dd-581f7c3f726b-kube-api-access-vzjkf\") pod \"ca00567f-b1a4-4846-b2dd-581f7c3f726b\" (UID: \"ca00567f-b1a4-4846-b2dd-581f7c3f726b\") " Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.438063 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca00567f-b1a4-4846-b2dd-581f7c3f726b-kube-api-access-vzjkf" (OuterVolumeSpecName: "kube-api-access-vzjkf") pod "ca00567f-b1a4-4846-b2dd-581f7c3f726b" (UID: "ca00567f-b1a4-4846-b2dd-581f7c3f726b"). InnerVolumeSpecName "kube-api-access-vzjkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.455398 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.461677 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.525271 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzjkf\" (UniqueName: \"kubernetes.io/projected/ca00567f-b1a4-4846-b2dd-581f7c3f726b-kube-api-access-vzjkf\") on node \"crc\" DevicePath \"\"" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.588872 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:29:02 crc kubenswrapper[5028]: E1123 08:29:02.589468 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca00567f-b1a4-4846-b2dd-581f7c3f726b" containerName="mariadb-client-5-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.589498 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca00567f-b1a4-4846-b2dd-581f7c3f726b" containerName="mariadb-client-5-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.589830 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca00567f-b1a4-4846-b2dd-581f7c3f726b" containerName="mariadb-client-5-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.590798 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.594552 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.626540 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6dl\" (UniqueName: \"kubernetes.io/projected/35441784-b4a5-43b9-acf5-e3269e78a56d-kube-api-access-jc6dl\") pod \"mariadb-client-6-default\" (UID: \"35441784-b4a5-43b9-acf5-e3269e78a56d\") " pod="openstack/mariadb-client-6-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.727523 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6dl\" (UniqueName: \"kubernetes.io/projected/35441784-b4a5-43b9-acf5-e3269e78a56d-kube-api-access-jc6dl\") pod \"mariadb-client-6-default\" (UID: \"35441784-b4a5-43b9-acf5-e3269e78a56d\") " pod="openstack/mariadb-client-6-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.762426 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6dl\" (UniqueName: \"kubernetes.io/projected/35441784-b4a5-43b9-acf5-e3269e78a56d-kube-api-access-jc6dl\") pod \"mariadb-client-6-default\" (UID: \"35441784-b4a5-43b9-acf5-e3269e78a56d\") " pod="openstack/mariadb-client-6-default" Nov 23 08:29:02 crc kubenswrapper[5028]: I1123 08:29:02.915209 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 23 08:29:03 crc kubenswrapper[5028]: I1123 08:29:03.070238 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 23 08:29:03 crc kubenswrapper[5028]: I1123 08:29:03.084261 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca00567f-b1a4-4846-b2dd-581f7c3f726b" path="/var/lib/kubelet/pods/ca00567f-b1a4-4846-b2dd-581f7c3f726b/volumes" Nov 23 08:29:03 crc kubenswrapper[5028]: I1123 08:29:03.085274 5028 scope.go:117] "RemoveContainer" containerID="363f53e0850feedb3abca1260c9d95506ec15b783eff27a760bd42736a9ed598" Nov 23 08:29:03 crc kubenswrapper[5028]: I1123 08:29:03.470815 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 23 08:29:04 crc kubenswrapper[5028]: I1123 08:29:04.080185 5028 generic.go:334] "Generic (PLEG): container finished" podID="35441784-b4a5-43b9-acf5-e3269e78a56d" containerID="2a0024e81be270f75d2126cc9e0877fff26cc8bdecab0bc380c00922736915b2" exitCode=1 Nov 23 08:29:04 crc kubenswrapper[5028]: I1123 08:29:04.080280 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"35441784-b4a5-43b9-acf5-e3269e78a56d","Type":"ContainerDied","Data":"2a0024e81be270f75d2126cc9e0877fff26cc8bdecab0bc380c00922736915b2"} Nov 23 08:29:04 crc kubenswrapper[5028]: I1123 08:29:04.080539 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"35441784-b4a5-43b9-acf5-e3269e78a56d","Type":"ContainerStarted","Data":"788e6ece40bd0bedb3223f9ffd65dc65a76a315afb01153055d7389af9ca13f8"} Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.512765 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.535846 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_35441784-b4a5-43b9-acf5-e3269e78a56d/mariadb-client-6-default/0.log"
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.562511 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"]
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.567499 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"]
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.672863 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc6dl\" (UniqueName: \"kubernetes.io/projected/35441784-b4a5-43b9-acf5-e3269e78a56d-kube-api-access-jc6dl\") pod \"35441784-b4a5-43b9-acf5-e3269e78a56d\" (UID: \"35441784-b4a5-43b9-acf5-e3269e78a56d\") "
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.679206 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35441784-b4a5-43b9-acf5-e3269e78a56d-kube-api-access-jc6dl" (OuterVolumeSpecName: "kube-api-access-jc6dl") pod "35441784-b4a5-43b9-acf5-e3269e78a56d" (UID: "35441784-b4a5-43b9-acf5-e3269e78a56d"). InnerVolumeSpecName "kube-api-access-jc6dl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.725334 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"]
Nov 23 08:29:05 crc kubenswrapper[5028]: E1123 08:29:05.725711 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35441784-b4a5-43b9-acf5-e3269e78a56d" containerName="mariadb-client-6-default"
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.725725 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="35441784-b4a5-43b9-acf5-e3269e78a56d" containerName="mariadb-client-6-default"
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.725898 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="35441784-b4a5-43b9-acf5-e3269e78a56d" containerName="mariadb-client-6-default"
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.726536 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.733479 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"]
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.774758 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc6dl\" (UniqueName: \"kubernetes.io/projected/35441784-b4a5-43b9-acf5-e3269e78a56d-kube-api-access-jc6dl\") on node \"crc\" DevicePath \"\""
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.876567 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbb99\" (UniqueName: \"kubernetes.io/projected/cedb1c3f-43f7-4d5f-baee-8320edd4c7cc-kube-api-access-vbb99\") pod \"mariadb-client-7-default\" (UID: \"cedb1c3f-43f7-4d5f-baee-8320edd4c7cc\") " pod="openstack/mariadb-client-7-default"
Nov 23 08:29:05 crc kubenswrapper[5028]: I1123 08:29:05.979481 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbb99\" (UniqueName: \"kubernetes.io/projected/cedb1c3f-43f7-4d5f-baee-8320edd4c7cc-kube-api-access-vbb99\") pod \"mariadb-client-7-default\" (UID: \"cedb1c3f-43f7-4d5f-baee-8320edd4c7cc\") " pod="openstack/mariadb-client-7-default"
Nov 23 08:29:06 crc kubenswrapper[5028]: I1123 08:29:06.001344 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbb99\" (UniqueName: \"kubernetes.io/projected/cedb1c3f-43f7-4d5f-baee-8320edd4c7cc-kube-api-access-vbb99\") pod \"mariadb-client-7-default\" (UID: \"cedb1c3f-43f7-4d5f-baee-8320edd4c7cc\") " pod="openstack/mariadb-client-7-default"
Nov 23 08:29:06 crc kubenswrapper[5028]: I1123 08:29:06.052198 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Nov 23 08:29:06 crc kubenswrapper[5028]: I1123 08:29:06.104096 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="788e6ece40bd0bedb3223f9ffd65dc65a76a315afb01153055d7389af9ca13f8"
Nov 23 08:29:06 crc kubenswrapper[5028]: I1123 08:29:06.104145 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default"
Nov 23 08:29:06 crc kubenswrapper[5028]: I1123 08:29:06.606283 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"]
Nov 23 08:29:06 crc kubenswrapper[5028]: W1123 08:29:06.616844 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcedb1c3f_43f7_4d5f_baee_8320edd4c7cc.slice/crio-4d48a9d02394dce3c4df626fae27d3366d0c51494178a5bb04099e85a4b553c8 WatchSource:0}: Error finding container 4d48a9d02394dce3c4df626fae27d3366d0c51494178a5bb04099e85a4b553c8: Status 404 returned error can't find the container with id 4d48a9d02394dce3c4df626fae27d3366d0c51494178a5bb04099e85a4b553c8
Nov 23 08:29:07 crc kubenswrapper[5028]: I1123 08:29:07.070281 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35441784-b4a5-43b9-acf5-e3269e78a56d" path="/var/lib/kubelet/pods/35441784-b4a5-43b9-acf5-e3269e78a56d/volumes"
Nov 23 08:29:07 crc kubenswrapper[5028]: I1123 08:29:07.114637 5028 generic.go:334] "Generic (PLEG): container finished" podID="cedb1c3f-43f7-4d5f-baee-8320edd4c7cc" containerID="d7e2497f64af7c39725be0b24d15d6b9b009007b799e96eceebc9c599e2b8aec" exitCode=0
Nov 23 08:29:07 crc kubenswrapper[5028]: I1123 08:29:07.114687 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"cedb1c3f-43f7-4d5f-baee-8320edd4c7cc","Type":"ContainerDied","Data":"d7e2497f64af7c39725be0b24d15d6b9b009007b799e96eceebc9c599e2b8aec"}
Nov 23 08:29:07 crc kubenswrapper[5028]: I1123 08:29:07.114721 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"cedb1c3f-43f7-4d5f-baee-8320edd4c7cc","Type":"ContainerStarted","Data":"4d48a9d02394dce3c4df626fae27d3366d0c51494178a5bb04099e85a4b553c8"}
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.594446 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.615868 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_cedb1c3f-43f7-4d5f-baee-8320edd4c7cc/mariadb-client-7-default/0.log"
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.640880 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"]
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.646831 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"]
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.722499 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbb99\" (UniqueName: \"kubernetes.io/projected/cedb1c3f-43f7-4d5f-baee-8320edd4c7cc-kube-api-access-vbb99\") pod \"cedb1c3f-43f7-4d5f-baee-8320edd4c7cc\" (UID: \"cedb1c3f-43f7-4d5f-baee-8320edd4c7cc\") "
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.729055 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedb1c3f-43f7-4d5f-baee-8320edd4c7cc-kube-api-access-vbb99" (OuterVolumeSpecName: "kube-api-access-vbb99") pod "cedb1c3f-43f7-4d5f-baee-8320edd4c7cc" (UID: "cedb1c3f-43f7-4d5f-baee-8320edd4c7cc"). InnerVolumeSpecName "kube-api-access-vbb99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.814496 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"]
Nov 23 08:29:08 crc kubenswrapper[5028]: E1123 08:29:08.814922 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedb1c3f-43f7-4d5f-baee-8320edd4c7cc" containerName="mariadb-client-7-default"
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.814946 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedb1c3f-43f7-4d5f-baee-8320edd4c7cc" containerName="mariadb-client-7-default"
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.815118 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedb1c3f-43f7-4d5f-baee-8320edd4c7cc" containerName="mariadb-client-7-default"
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.815766 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.820371 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"]
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.824688 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbb99\" (UniqueName: \"kubernetes.io/projected/cedb1c3f-43f7-4d5f-baee-8320edd4c7cc-kube-api-access-vbb99\") on node \"crc\" DevicePath \"\""
Nov 23 08:29:08 crc kubenswrapper[5028]: I1123 08:29:08.926485 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5np79\" (UniqueName: \"kubernetes.io/projected/2fbb25ec-4b9b-4c49-9774-b227aa6db800-kube-api-access-5np79\") pod \"mariadb-client-2\" (UID: \"2fbb25ec-4b9b-4c49-9774-b227aa6db800\") " pod="openstack/mariadb-client-2"
Nov 23 08:29:09 crc kubenswrapper[5028]: I1123 08:29:09.028689 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5np79\" (UniqueName: \"kubernetes.io/projected/2fbb25ec-4b9b-4c49-9774-b227aa6db800-kube-api-access-5np79\") pod \"mariadb-client-2\" (UID: \"2fbb25ec-4b9b-4c49-9774-b227aa6db800\") " pod="openstack/mariadb-client-2"
Nov 23 08:29:09 crc kubenswrapper[5028]: I1123 08:29:09.058298 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5np79\" (UniqueName: \"kubernetes.io/projected/2fbb25ec-4b9b-4c49-9774-b227aa6db800-kube-api-access-5np79\") pod \"mariadb-client-2\" (UID: \"2fbb25ec-4b9b-4c49-9774-b227aa6db800\") " pod="openstack/mariadb-client-2"
Nov 23 08:29:09 crc kubenswrapper[5028]: I1123 08:29:09.067782 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cedb1c3f-43f7-4d5f-baee-8320edd4c7cc" path="/var/lib/kubelet/pods/cedb1c3f-43f7-4d5f-baee-8320edd4c7cc/volumes"
Nov 23 08:29:09 crc kubenswrapper[5028]: I1123 08:29:09.134547 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Nov 23 08:29:09 crc kubenswrapper[5028]: I1123 08:29:09.134869 5028 scope.go:117] "RemoveContainer" containerID="d7e2497f64af7c39725be0b24d15d6b9b009007b799e96eceebc9c599e2b8aec"
Nov 23 08:29:09 crc kubenswrapper[5028]: I1123 08:29:09.134985 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default"
Nov 23 08:29:09 crc kubenswrapper[5028]: I1123 08:29:09.662806 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"]
Nov 23 08:29:10 crc kubenswrapper[5028]: I1123 08:29:10.146414 5028 generic.go:334] "Generic (PLEG): container finished" podID="2fbb25ec-4b9b-4c49-9774-b227aa6db800" containerID="7a3401134dddae68aea6d40a3408bc7f449349371152a6dafa31049fc4539399" exitCode=0
Nov 23 08:29:10 crc kubenswrapper[5028]: I1123 08:29:10.146504 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"2fbb25ec-4b9b-4c49-9774-b227aa6db800","Type":"ContainerDied","Data":"7a3401134dddae68aea6d40a3408bc7f449349371152a6dafa31049fc4539399"}
Nov 23 08:29:10 crc kubenswrapper[5028]: I1123 08:29:10.146587 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"2fbb25ec-4b9b-4c49-9774-b227aa6db800","Type":"ContainerStarted","Data":"8ffb221b2ad3183dbd9392532d29624b672fcf6ac13bbed7a0d8e62cf0b77a9c"}
Nov 23 08:29:11 crc kubenswrapper[5028]: I1123 08:29:11.539503 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Nov 23 08:29:11 crc kubenswrapper[5028]: I1123 08:29:11.564255 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_2fbb25ec-4b9b-4c49-9774-b227aa6db800/mariadb-client-2/0.log"
Nov 23 08:29:11 crc kubenswrapper[5028]: I1123 08:29:11.588441 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"]
Nov 23 08:29:11 crc kubenswrapper[5028]: I1123 08:29:11.592799 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"]
Nov 23 08:29:11 crc kubenswrapper[5028]: I1123 08:29:11.677353 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5np79\" (UniqueName: \"kubernetes.io/projected/2fbb25ec-4b9b-4c49-9774-b227aa6db800-kube-api-access-5np79\") pod \"2fbb25ec-4b9b-4c49-9774-b227aa6db800\" (UID: \"2fbb25ec-4b9b-4c49-9774-b227aa6db800\") "
Nov 23 08:29:11 crc kubenswrapper[5028]: I1123 08:29:11.683372 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbb25ec-4b9b-4c49-9774-b227aa6db800-kube-api-access-5np79" (OuterVolumeSpecName: "kube-api-access-5np79") pod "2fbb25ec-4b9b-4c49-9774-b227aa6db800" (UID: "2fbb25ec-4b9b-4c49-9774-b227aa6db800"). InnerVolumeSpecName "kube-api-access-5np79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:29:11 crc kubenswrapper[5028]: I1123 08:29:11.780272 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5np79\" (UniqueName: \"kubernetes.io/projected/2fbb25ec-4b9b-4c49-9774-b227aa6db800-kube-api-access-5np79\") on node \"crc\" DevicePath \"\""
Nov 23 08:29:12 crc kubenswrapper[5028]: I1123 08:29:12.166457 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ffb221b2ad3183dbd9392532d29624b672fcf6ac13bbed7a0d8e62cf0b77a9c"
Nov 23 08:29:12 crc kubenswrapper[5028]: I1123 08:29:12.166499 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2"
Nov 23 08:29:13 crc kubenswrapper[5028]: I1123 08:29:13.069549 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbb25ec-4b9b-4c49-9774-b227aa6db800" path="/var/lib/kubelet/pods/2fbb25ec-4b9b-4c49-9774-b227aa6db800/volumes"
Nov 23 08:29:30 crc kubenswrapper[5028]: I1123 08:29:30.946685 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:29:30 crc kubenswrapper[5028]: I1123 08:29:30.947918 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.170671 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"]
Nov 23 08:30:00 crc kubenswrapper[5028]: E1123 08:30:00.173482 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbb25ec-4b9b-4c49-9774-b227aa6db800" containerName="mariadb-client-2"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.173511 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbb25ec-4b9b-4c49-9774-b227aa6db800" containerName="mariadb-client-2"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.175187 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbb25ec-4b9b-4c49-9774-b227aa6db800" containerName="mariadb-client-2"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.176159 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.179302 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.181098 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.188819 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"]
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.268803 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdtsg\" (UniqueName: \"kubernetes.io/projected/6804a411-c895-4508-8262-c197f4e649fd-kube-api-access-xdtsg\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.268976 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6804a411-c895-4508-8262-c197f4e649fd-config-volume\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.269068 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6804a411-c895-4508-8262-c197f4e649fd-secret-volume\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.370368 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6804a411-c895-4508-8262-c197f4e649fd-config-volume\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.370449 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6804a411-c895-4508-8262-c197f4e649fd-secret-volume\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.370532 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdtsg\" (UniqueName: \"kubernetes.io/projected/6804a411-c895-4508-8262-c197f4e649fd-kube-api-access-xdtsg\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.371791 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6804a411-c895-4508-8262-c197f4e649fd-config-volume\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.378659 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6804a411-c895-4508-8262-c197f4e649fd-secret-volume\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.397022 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdtsg\" (UniqueName: \"kubernetes.io/projected/6804a411-c895-4508-8262-c197f4e649fd-kube-api-access-xdtsg\") pod \"collect-profiles-29398110-44rql\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.504987 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.936879 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"]
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.945655 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:30:00 crc kubenswrapper[5028]: I1123 08:30:00.945692 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:30:01 crc kubenswrapper[5028]: I1123 08:30:01.668728 5028 generic.go:334] "Generic (PLEG): container finished" podID="6804a411-c895-4508-8262-c197f4e649fd" containerID="5da2659e3fd1737d19e7d6f3e4085af443dcc6167c8efb57c9301f8c080b9af0" exitCode=0
Nov 23 08:30:01 crc kubenswrapper[5028]: I1123 08:30:01.668828 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql" event={"ID":"6804a411-c895-4508-8262-c197f4e649fd","Type":"ContainerDied","Data":"5da2659e3fd1737d19e7d6f3e4085af443dcc6167c8efb57c9301f8c080b9af0"}
Nov 23 08:30:01 crc kubenswrapper[5028]: I1123 08:30:01.669243 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql" event={"ID":"6804a411-c895-4508-8262-c197f4e649fd","Type":"ContainerStarted","Data":"336fad0380cfc7350a09dc43038fe7c555edef9bbfab1c837f98d6fc9edaff53"}
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.049468 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.231076 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6804a411-c895-4508-8262-c197f4e649fd-secret-volume\") pod \"6804a411-c895-4508-8262-c197f4e649fd\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") "
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.231209 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdtsg\" (UniqueName: \"kubernetes.io/projected/6804a411-c895-4508-8262-c197f4e649fd-kube-api-access-xdtsg\") pod \"6804a411-c895-4508-8262-c197f4e649fd\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") "
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.231490 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6804a411-c895-4508-8262-c197f4e649fd-config-volume\") pod \"6804a411-c895-4508-8262-c197f4e649fd\" (UID: \"6804a411-c895-4508-8262-c197f4e649fd\") "
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.232980 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6804a411-c895-4508-8262-c197f4e649fd-config-volume" (OuterVolumeSpecName: "config-volume") pod "6804a411-c895-4508-8262-c197f4e649fd" (UID: "6804a411-c895-4508-8262-c197f4e649fd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.240803 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6804a411-c895-4508-8262-c197f4e649fd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6804a411-c895-4508-8262-c197f4e649fd" (UID: "6804a411-c895-4508-8262-c197f4e649fd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.242133 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6804a411-c895-4508-8262-c197f4e649fd-kube-api-access-xdtsg" (OuterVolumeSpecName: "kube-api-access-xdtsg") pod "6804a411-c895-4508-8262-c197f4e649fd" (UID: "6804a411-c895-4508-8262-c197f4e649fd"). InnerVolumeSpecName "kube-api-access-xdtsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.335144 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6804a411-c895-4508-8262-c197f4e649fd-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.335249 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6804a411-c895-4508-8262-c197f4e649fd-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.335283 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdtsg\" (UniqueName: \"kubernetes.io/projected/6804a411-c895-4508-8262-c197f4e649fd-kube-api-access-xdtsg\") on node \"crc\" DevicePath \"\""
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.690340 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql" event={"ID":"6804a411-c895-4508-8262-c197f4e649fd","Type":"ContainerDied","Data":"336fad0380cfc7350a09dc43038fe7c555edef9bbfab1c837f98d6fc9edaff53"}
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.690415 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"
Nov 23 08:30:03 crc kubenswrapper[5028]: I1123 08:30:03.690433 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="336fad0380cfc7350a09dc43038fe7c555edef9bbfab1c837f98d6fc9edaff53"
Nov 23 08:30:04 crc kubenswrapper[5028]: I1123 08:30:04.141002 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj"]
Nov 23 08:30:04 crc kubenswrapper[5028]: I1123 08:30:04.149826 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398065-f2kbj"]
Nov 23 08:30:05 crc kubenswrapper[5028]: I1123 08:30:05.066208 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f11abaa2-daf4-4fd2-a736-fa77aeae7977" path="/var/lib/kubelet/pods/f11abaa2-daf4-4fd2-a736-fa77aeae7977/volumes"
Nov 23 08:30:24 crc kubenswrapper[5028]: I1123 08:30:24.794991 5028 scope.go:117] "RemoveContainer" containerID="a758ea1b07e92616b0fa449ee623a458cde3b234b1f04539092fb68b8718bdda"
Nov 23 08:30:24 crc kubenswrapper[5028]: I1123 08:30:24.851042 5028 scope.go:117] "RemoveContainer" containerID="d0ef5c5d2057f912261ab15d1d85579091fdae3e615bb5878d560867db1b7c61"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.760689 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bd5vg"]
Nov 23 08:30:28 crc kubenswrapper[5028]: E1123 08:30:28.762180 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6804a411-c895-4508-8262-c197f4e649fd" containerName="collect-profiles"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.762203 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6804a411-c895-4508-8262-c197f4e649fd" containerName="collect-profiles"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.762453 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6804a411-c895-4508-8262-c197f4e649fd" containerName="collect-profiles"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.764153 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.811480 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd5vg"]
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.851144 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwn6z\" (UniqueName: \"kubernetes.io/projected/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-kube-api-access-kwn6z\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.851469 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-catalog-content\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.851558 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-utilities\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.953510 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-utilities\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.953566 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-catalog-content\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.953695 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwn6z\" (UniqueName: \"kubernetes.io/projected/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-kube-api-access-kwn6z\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.954474 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-utilities\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.954598 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-catalog-content\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:28 crc kubenswrapper[5028]: I1123 08:30:28.982602 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwn6z\" (UniqueName: \"kubernetes.io/projected/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-kube-api-access-kwn6z\") pod \"community-operators-bd5vg\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") " pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:29 crc kubenswrapper[5028]: I1123 08:30:29.096827 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:29 crc kubenswrapper[5028]: I1123 08:30:29.614318 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bd5vg"]
Nov 23 08:30:29 crc kubenswrapper[5028]: I1123 08:30:29.952795 5028 generic.go:334] "Generic (PLEG): container finished" podID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerID="6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f" exitCode=0
Nov 23 08:30:29 crc kubenswrapper[5028]: I1123 08:30:29.952874 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5vg" event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerDied","Data":"6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f"}
Nov 23 08:30:29 crc kubenswrapper[5028]: I1123 08:30:29.952926 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5vg" event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerStarted","Data":"5340cab4b86b8586df50ce11b5ed917a3dcbbe9f606367c1f23bfb589cc9c134"}
Nov 23 08:30:30 crc kubenswrapper[5028]: I1123 08:30:30.946532 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:30:30 crc kubenswrapper[5028]: I1123 08:30:30.947145 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:30:30 crc kubenswrapper[5028]: I1123 08:30:30.947223 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 08:30:30 crc kubenswrapper[5028]: I1123 08:30:30.948265 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 08:30:30 crc kubenswrapper[5028]: I1123 08:30:30.948352 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" gracePeriod=600
Nov 23 08:30:30 crc kubenswrapper[5028]: I1123 08:30:30.964997 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5vg" event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerStarted","Data":"df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c"}
event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerStarted","Data":"df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c"} Nov 23 08:30:31 crc kubenswrapper[5028]: E1123 08:30:31.088499 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:30:31 crc kubenswrapper[5028]: I1123 08:30:31.976633 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" exitCode=0 Nov 23 08:30:31 crc kubenswrapper[5028]: I1123 08:30:31.976697 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423"} Nov 23 08:30:31 crc kubenswrapper[5028]: I1123 08:30:31.977189 5028 scope.go:117] "RemoveContainer" containerID="2348dae283de5d6de36acb1e4941d5d2419045a90922533d3a7b3763cf1cbc5e" Nov 23 08:30:31 crc kubenswrapper[5028]: I1123 08:30:31.978143 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:30:31 crc kubenswrapper[5028]: E1123 08:30:31.978434 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:30:31 crc kubenswrapper[5028]: I1123 08:30:31.981713 5028 generic.go:334] "Generic (PLEG): container finished" podID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerID="df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c" exitCode=0 Nov 23 08:30:31 crc kubenswrapper[5028]: I1123 08:30:31.981784 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5vg" event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerDied","Data":"df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c"} Nov 23 08:30:32 crc kubenswrapper[5028]: I1123 08:30:32.994017 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5vg" event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerStarted","Data":"9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064"} Nov 23 08:30:33 crc kubenswrapper[5028]: I1123 08:30:33.021870 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bd5vg" podStartSLOduration=2.590348722 podStartE2EDuration="5.02184761s" podCreationTimestamp="2025-11-23 08:30:28 +0000 UTC" firstStartedPulling="2025-11-23 08:30:29.955937236 +0000 UTC m=+6013.653342035" lastFinishedPulling="2025-11-23 08:30:32.387436144 +0000 UTC m=+6016.084840923" observedRunningTime="2025-11-23 08:30:33.021262556 +0000 UTC m=+6016.718667345" 
watchObservedRunningTime="2025-11-23 08:30:33.02184761 +0000 UTC m=+6016.719252399" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.120102 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-27m97"] Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.123716 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.143547 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27m97"] Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.302395 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-catalog-content\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.302455 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhzd8\" (UniqueName: \"kubernetes.io/projected/90663dbe-acfd-4bb3-b7b4-c24b6c554266-kube-api-access-qhzd8\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.302507 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-utilities\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.403672 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-catalog-content\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.403730 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhzd8\" (UniqueName: \"kubernetes.io/projected/90663dbe-acfd-4bb3-b7b4-c24b6c554266-kube-api-access-qhzd8\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.403778 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-utilities\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.404210 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-catalog-content\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.404264 5028 operation_generator.go:637] "MountVolume.SetUp 
Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.426740 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhzd8\" (UniqueName: \"kubernetes.io/projected/90663dbe-acfd-4bb3-b7b4-c24b6c554266-kube-api-access-qhzd8\") pod \"redhat-marketplace-27m97\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " pod="openshift-marketplace/redhat-marketplace-27m97"
Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.460225 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27m97"
Nov 23 08:30:36 crc kubenswrapper[5028]: I1123 08:30:36.991038 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27m97"]
Nov 23 08:30:37 crc kubenswrapper[5028]: I1123 08:30:37.047931 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27m97" event={"ID":"90663dbe-acfd-4bb3-b7b4-c24b6c554266","Type":"ContainerStarted","Data":"fad0db91eb34f4d7a07aea78b426d40c437328c23acbe08ea04ad0e2f033c78e"}
Nov 23 08:30:38 crc kubenswrapper[5028]: I1123 08:30:38.064824 5028 generic.go:334] "Generic (PLEG): container finished" podID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerID="10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f" exitCode=0
Nov 23 08:30:38 crc kubenswrapper[5028]: I1123 08:30:38.065095 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27m97" event={"ID":"90663dbe-acfd-4bb3-b7b4-c24b6c554266","Type":"ContainerDied","Data":"10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f"}
Nov 23 08:30:39 crc kubenswrapper[5028]: I1123 08:30:39.072885 5028 generic.go:334] "Generic (PLEG): container finished" podID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerID="a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f" exitCode=0
Nov 23 08:30:39 crc kubenswrapper[5028]: I1123 08:30:39.072980 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27m97" event={"ID":"90663dbe-acfd-4bb3-b7b4-c24b6c554266","Type":"ContainerDied","Data":"a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f"}
Nov 23 08:30:39 crc kubenswrapper[5028]: I1123 08:30:39.098499 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:39 crc kubenswrapper[5028]: I1123 08:30:39.098599 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:39 crc kubenswrapper[5028]: I1123 08:30:39.156983 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:40 crc kubenswrapper[5028]: I1123 08:30:40.094370 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27m97" event={"ID":"90663dbe-acfd-4bb3-b7b4-c24b6c554266","Type":"ContainerStarted","Data":"e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8"}
Nov 23 08:30:40 crc kubenswrapper[5028]: I1123 08:30:40.121673 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-27m97" podStartSLOduration=2.464487364 podStartE2EDuration="4.121639445s" podCreationTimestamp="2025-11-23 08:30:36 +0000 UTC" firstStartedPulling="2025-11-23 08:30:38.069116974 +0000 UTC m=+6021.766521763" lastFinishedPulling="2025-11-23 08:30:39.726269055 +0000 UTC m=+6023.423673844" observedRunningTime="2025-11-23 08:30:40.11817078 +0000 UTC m=+6023.815575619" watchObservedRunningTime="2025-11-23 08:30:40.121639445 +0000 UTC m=+6023.819044234"
Nov 23 08:30:40 crc kubenswrapper[5028]: I1123 08:30:40.150266 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:41 crc kubenswrapper[5028]: I1123 08:30:41.492184 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd5vg"]
Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.113097 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bd5vg" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="registry-server" containerID="cri-o://9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064" gracePeriod=2
Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.585655 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd5vg"
Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.744792 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-utilities\") pod \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") "
Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.744893 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-catalog-content\") pod \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") "
Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.745061 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwn6z\" (UniqueName: \"kubernetes.io/projected/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-kube-api-access-kwn6z\") pod \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\" (UID: \"ddb5df4f-2986-4f36-9f6d-e6f5760916a0\") "
Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.746421 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-utilities" (OuterVolumeSpecName: "utilities") pod "ddb5df4f-2986-4f36-9f6d-e6f5760916a0" (UID: "ddb5df4f-2986-4f36-9f6d-e6f5760916a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.756281 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-kube-api-access-kwn6z" (OuterVolumeSpecName: "kube-api-access-kwn6z") pod "ddb5df4f-2986-4f36-9f6d-e6f5760916a0" (UID: "ddb5df4f-2986-4f36-9f6d-e6f5760916a0"). InnerVolumeSpecName "kube-api-access-kwn6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.797742 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddb5df4f-2986-4f36-9f6d-e6f5760916a0" (UID: "ddb5df4f-2986-4f36-9f6d-e6f5760916a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.847760 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.847811 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:42 crc kubenswrapper[5028]: I1123 08:30:42.847825 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwn6z\" (UniqueName: \"kubernetes.io/projected/ddb5df4f-2986-4f36-9f6d-e6f5760916a0-kube-api-access-kwn6z\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.127131 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bd5vg" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.127678 5028 generic.go:334] "Generic (PLEG): container finished" podID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerID="9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064" exitCode=0 Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.127750 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5vg" event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerDied","Data":"9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064"} Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.127803 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bd5vg" event={"ID":"ddb5df4f-2986-4f36-9f6d-e6f5760916a0","Type":"ContainerDied","Data":"5340cab4b86b8586df50ce11b5ed917a3dcbbe9f606367c1f23bfb589cc9c134"} Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.127828 5028 scope.go:117] "RemoveContainer" containerID="9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.161304 5028 scope.go:117] "RemoveContainer" containerID="df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.172462 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bd5vg"] Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.184407 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bd5vg"] Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.193023 5028 scope.go:117] "RemoveContainer" containerID="6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.219545 5028 scope.go:117] "RemoveContainer" containerID="9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064" Nov 23 08:30:43 crc kubenswrapper[5028]: E1123 08:30:43.220220 5028 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064\": container with ID starting with 9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064 not found: ID does not exist" containerID="9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.220267 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064"} err="failed to get container status \"9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064\": rpc error: code = NotFound desc = could not find container \"9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064\": container with ID starting with 9fbaf711739a913526a560817a80338470afe558e1fd82e95fae14667951c064 not found: ID does not exist" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.220295 5028 scope.go:117] "RemoveContainer" containerID="df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c" Nov 23 08:30:43 crc kubenswrapper[5028]: E1123 08:30:43.220795 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c\": container with ID starting with df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c not found: ID does not exist" containerID="df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.220862 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c"} err="failed to get container status \"df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c\": rpc error: code = NotFound desc = could not find container \"df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c\": container with ID starting with df8bd59f89c460bedfc22b4be489a83c645975530460231279d719c06148580c not found: ID does not exist" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.220903 5028 scope.go:117] "RemoveContainer" containerID="6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f" Nov 23 08:30:43 crc kubenswrapper[5028]: E1123 08:30:43.221332 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f\": container with ID starting with 6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f not found: ID does not exist" containerID="6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f" Nov 23 08:30:43 crc kubenswrapper[5028]: I1123 08:30:43.221459 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f"} err="failed to get container status \"6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f\": rpc error: code = NotFound desc = could not find container \"6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f\": container with ID starting with 6280fde8740943d9434b1e5e9013c519c4ce4fad07792c444181b7e71243275f not found: ID does not exist" Nov 23 08:30:45 crc kubenswrapper[5028]: I1123 08:30:45.053807 5028 scope.go:117] "RemoveContainer" 
containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:30:45 crc kubenswrapper[5028]: E1123 08:30:45.054750 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:30:45 crc kubenswrapper[5028]: I1123 08:30:45.063093 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" path="/var/lib/kubelet/pods/ddb5df4f-2986-4f36-9f6d-e6f5760916a0/volumes" Nov 23 08:30:46 crc kubenswrapper[5028]: I1123 08:30:46.461176 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:46 crc kubenswrapper[5028]: I1123 08:30:46.461251 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:46 crc kubenswrapper[5028]: I1123 08:30:46.518205 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:47 crc kubenswrapper[5028]: I1123 08:30:47.258626 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:47 crc kubenswrapper[5028]: I1123 08:30:47.310445 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27m97"] Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.209388 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-27m97" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="registry-server" containerID="cri-o://e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8" gracePeriod=2 Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.738082 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.875232 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhzd8\" (UniqueName: \"kubernetes.io/projected/90663dbe-acfd-4bb3-b7b4-c24b6c554266-kube-api-access-qhzd8\") pod \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.875630 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-utilities\") pod \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.875771 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-catalog-content\") pod \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\" (UID: \"90663dbe-acfd-4bb3-b7b4-c24b6c554266\") " Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.876981 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-utilities" (OuterVolumeSpecName: "utilities") pod "90663dbe-acfd-4bb3-b7b4-c24b6c554266" (UID: "90663dbe-acfd-4bb3-b7b4-c24b6c554266"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.889205 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90663dbe-acfd-4bb3-b7b4-c24b6c554266-kube-api-access-qhzd8" (OuterVolumeSpecName: "kube-api-access-qhzd8") pod "90663dbe-acfd-4bb3-b7b4-c24b6c554266" (UID: "90663dbe-acfd-4bb3-b7b4-c24b6c554266"). InnerVolumeSpecName "kube-api-access-qhzd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.897026 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90663dbe-acfd-4bb3-b7b4-c24b6c554266" (UID: "90663dbe-acfd-4bb3-b7b4-c24b6c554266"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.978830 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.978871 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhzd8\" (UniqueName: \"kubernetes.io/projected/90663dbe-acfd-4bb3-b7b4-c24b6c554266-kube-api-access-qhzd8\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:49 crc kubenswrapper[5028]: I1123 08:30:49.978882 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90663dbe-acfd-4bb3-b7b4-c24b6c554266-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.223905 5028 generic.go:334] "Generic (PLEG): container finished" podID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerID="e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8" exitCode=0 Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.224259 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27m97" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.224295 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27m97" event={"ID":"90663dbe-acfd-4bb3-b7b4-c24b6c554266","Type":"ContainerDied","Data":"e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8"} Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.225360 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27m97" event={"ID":"90663dbe-acfd-4bb3-b7b4-c24b6c554266","Type":"ContainerDied","Data":"fad0db91eb34f4d7a07aea78b426d40c437328c23acbe08ea04ad0e2f033c78e"} Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.225397 5028 scope.go:117] "RemoveContainer" containerID="e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.279148 5028 scope.go:117] "RemoveContainer" containerID="a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.289724 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27m97"] Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.301309 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-27m97"] Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.312756 5028 scope.go:117] "RemoveContainer" containerID="10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.348590 5028 scope.go:117] "RemoveContainer" containerID="e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8" Nov 23 08:30:50 crc kubenswrapper[5028]: E1123 08:30:50.349281 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8\": container with ID starting with e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8 not found: ID does not exist" containerID="e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.349394 5028 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8"} err="failed to get container status \"e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8\": rpc error: code = NotFound desc = could not find container \"e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8\": container with ID starting with e3eb0c4f02ee5fd1443ee0617b8e05c3b5ca20ddbc1bcb94fd563330abf8daf8 not found: ID does not exist" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.349419 5028 scope.go:117] "RemoveContainer" containerID="a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f" Nov 23 08:30:50 crc kubenswrapper[5028]: E1123 08:30:50.349724 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f\": container with ID starting with a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f not found: ID does not exist" containerID="a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.349746 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f"} err="failed to get container status \"a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f\": rpc error: code = NotFound desc = could not find container \"a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f\": container with ID starting with a1fc7c300db72b26dbadac805d970475eb37e322d8d426e1993c14be3865ae2f not found: ID does not exist" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.349763 5028 scope.go:117] "RemoveContainer" containerID="10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f" Nov 23 08:30:50 crc kubenswrapper[5028]: E1123 08:30:50.350385 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f\": container with ID starting with 10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f not found: ID does not exist" containerID="10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f" Nov 23 08:30:50 crc kubenswrapper[5028]: I1123 08:30:50.350411 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f"} err="failed to get container status \"10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f\": rpc error: code = NotFound desc = could not find container \"10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f\": container with ID starting with 10eeb6692fc1ffc81f64df80e95f0ab950c9be3add9a4d14dca386979fed6f3f not found: ID does not exist" Nov 23 08:30:51 crc kubenswrapper[5028]: I1123 08:30:51.092577 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" path="/var/lib/kubelet/pods/90663dbe-acfd-4bb3-b7b4-c24b6c554266/volumes" Nov 23 08:30:57 crc kubenswrapper[5028]: I1123 08:30:57.061935 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:30:57 crc kubenswrapper[5028]: E1123 08:30:57.062801 5028 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:31:11 crc kubenswrapper[5028]: I1123 08:31:11.053203 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:31:11 crc kubenswrapper[5028]: E1123 08:31:11.054182 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:31:23 crc kubenswrapper[5028]: I1123 08:31:23.053456 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:31:23 crc kubenswrapper[5028]: E1123 08:31:23.055137 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:31:36 crc kubenswrapper[5028]: I1123 08:31:36.052856 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:31:36 crc kubenswrapper[5028]: E1123 08:31:36.053817 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:31:48 crc kubenswrapper[5028]: I1123 08:31:48.054262 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:31:48 crc kubenswrapper[5028]: E1123 08:31:48.056100 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:31:59 crc kubenswrapper[5028]: I1123 08:31:59.054237 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:31:59 crc kubenswrapper[5028]: E1123 08:31:59.055696 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:32:10 crc kubenswrapper[5028]: I1123 08:32:10.053855 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:32:10 crc kubenswrapper[5028]: E1123 08:32:10.054847 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:32:22 crc kubenswrapper[5028]: I1123 08:32:22.054693 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:32:22 crc kubenswrapper[5028]: E1123 08:32:22.057794 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:32:35 crc kubenswrapper[5028]: I1123 08:32:35.053800 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:32:35 crc kubenswrapper[5028]: E1123 08:32:35.055066 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:32:46 crc kubenswrapper[5028]: I1123 08:32:46.054151 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:32:46 crc kubenswrapper[5028]: E1123 08:32:46.055541 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:32:58 crc kubenswrapper[5028]: I1123 08:32:58.053583 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:32:58 crc kubenswrapper[5028]: E1123 08:32:58.055031 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:33:09 crc kubenswrapper[5028]: I1123 08:33:09.054424 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:33:09 crc kubenswrapper[5028]: E1123 08:33:09.056678 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:33:24 crc kubenswrapper[5028]: I1123 08:33:24.053140 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:33:24 crc kubenswrapper[5028]: E1123 08:33:24.054466 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:33:25 crc kubenswrapper[5028]: I1123 08:33:25.002635 5028 scope.go:117] "RemoveContainer" containerID="8b04e9efa6a459eda72a77518e5451426073757d030df9355be503e349b70740" Nov 23 08:33:25 crc kubenswrapper[5028]: I1123 08:33:25.028552 5028 scope.go:117] "RemoveContainer" containerID="a752f1950966c72aa799602d29ed8178f4634f3c13326b149fa66c72494a6ee2" Nov 23 08:33:39 crc kubenswrapper[5028]: I1123 08:33:39.053865 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:33:39 crc kubenswrapper[5028]: E1123 08:33:39.054716 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:33:53 crc kubenswrapper[5028]: I1123 08:33:53.054422 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:33:53 crc kubenswrapper[5028]: E1123 08:33:53.055855 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:34:04 crc kubenswrapper[5028]: I1123 08:34:04.052730 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:34:04 crc kubenswrapper[5028]: E1123 08:34:04.053621 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:34:18 crc kubenswrapper[5028]: I1123 08:34:18.053997 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:34:18 crc kubenswrapper[5028]: E1123 08:34:18.054853 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:34:32 crc kubenswrapper[5028]: I1123 08:34:32.054313 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:34:32 crc kubenswrapper[5028]: E1123 08:34:32.056036 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:34:45 crc kubenswrapper[5028]: I1123 08:34:45.053690 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:34:45 crc kubenswrapper[5028]: E1123 08:34:45.055024 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:34:57 crc kubenswrapper[5028]: I1123 08:34:57.060982 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:34:57 crc kubenswrapper[5028]: E1123 08:34:57.062127 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:35:10 crc kubenswrapper[5028]: I1123 08:35:10.053172 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:35:10 crc kubenswrapper[5028]: E1123 08:35:10.054773 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:35:24 crc kubenswrapper[5028]: I1123 08:35:24.054084 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:35:24 crc kubenswrapper[5028]: E1123 08:35:24.057552 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:35:25 crc kubenswrapper[5028]: I1123 08:35:25.106018 5028 scope.go:117] "RemoveContainer" containerID="7a3401134dddae68aea6d40a3408bc7f449349371152a6dafa31049fc4539399" Nov 23 08:35:25 crc kubenswrapper[5028]: I1123 08:35:25.142906 5028 scope.go:117] "RemoveContainer" containerID="9b00e33f984eb578e31313c9942543e2f3062b5045f500b7d5387684f613a47d" Nov 23 08:35:25 crc kubenswrapper[5028]: I1123 08:35:25.197753 5028 scope.go:117] "RemoveContainer" containerID="7389fffdcafc993c3bae3d7cb88ddd29313470efaccfe8ae8d702219df14d608" Nov 23 08:35:25 crc kubenswrapper[5028]: I1123 08:35:25.255265 5028 scope.go:117] "RemoveContainer" containerID="2a0024e81be270f75d2126cc9e0877fff26cc8bdecab0bc380c00922736915b2" Nov 23 08:35:25 crc kubenswrapper[5028]: I1123 08:35:25.283624 5028 scope.go:117] "RemoveContainer" containerID="aa33e6e08c382afbb09686aaa0e28ff4b26fd22cb55c645ec42391f06900155d" Nov 23 08:35:25 crc kubenswrapper[5028]: I1123 08:35:25.313684 5028 scope.go:117] "RemoveContainer" containerID="464f20857948895ec170f6a8b23b48f30897665ac8a2d8434f02e45f585409f1" Nov 23 08:35:38 crc kubenswrapper[5028]: I1123 08:35:38.052905 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:35:39 crc kubenswrapper[5028]: I1123 08:35:39.284396 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"831fc7cc0754cd125280584b4847bfdb0bf23ab4a16c900638efc104572da6ce"} Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.147069 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mst26"] Nov 23 08:35:45 crc kubenswrapper[5028]: E1123 08:35:45.148004 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="extract-content" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148018 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="extract-content" Nov 23 08:35:45 crc kubenswrapper[5028]: E1123 08:35:45.148025 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="registry-server" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148035 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="registry-server" Nov 23 08:35:45 crc kubenswrapper[5028]: E1123 08:35:45.148054 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="extract-utilities" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148060 5028 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="extract-utilities" Nov 23 08:35:45 crc kubenswrapper[5028]: E1123 08:35:45.148080 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="registry-server" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148086 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="registry-server" Nov 23 08:35:45 crc kubenswrapper[5028]: E1123 08:35:45.148099 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="extract-utilities" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148105 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="extract-utilities" Nov 23 08:35:45 crc kubenswrapper[5028]: E1123 08:35:45.148115 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="extract-content" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148121 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="extract-content" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148270 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb5df4f-2986-4f36-9f6d-e6f5760916a0" containerName="registry-server" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.148291 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="90663dbe-acfd-4bb3-b7b4-c24b6c554266" containerName="registry-server" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.149837 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.163004 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mst26"] Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.300938 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d74l\" (UniqueName: \"kubernetes.io/projected/808bd13d-0001-4817-a36a-f93dd79b7f9d-kube-api-access-8d74l\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.301509 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-catalog-content\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.301607 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-utilities\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.403397 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d74l\" (UniqueName: \"kubernetes.io/projected/808bd13d-0001-4817-a36a-f93dd79b7f9d-kube-api-access-8d74l\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.403524 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-catalog-content\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.403608 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-utilities\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.405483 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-utilities\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.405556 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-catalog-content\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.427369 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8d74l\" (UniqueName: \"kubernetes.io/projected/808bd13d-0001-4817-a36a-f93dd79b7f9d-kube-api-access-8d74l\") pod \"redhat-operators-mst26\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.507118 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:45 crc kubenswrapper[5028]: I1123 08:35:45.973335 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mst26"] Nov 23 08:35:46 crc kubenswrapper[5028]: I1123 08:35:46.361184 5028 generic.go:334] "Generic (PLEG): container finished" podID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerID="762f9a81aaf150ae9378dc1c4c49156c7e0978752dc7e9e8975addc91dbb0348" exitCode=0 Nov 23 08:35:46 crc kubenswrapper[5028]: I1123 08:35:46.361257 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mst26" event={"ID":"808bd13d-0001-4817-a36a-f93dd79b7f9d","Type":"ContainerDied","Data":"762f9a81aaf150ae9378dc1c4c49156c7e0978752dc7e9e8975addc91dbb0348"} Nov 23 08:35:46 crc kubenswrapper[5028]: I1123 08:35:46.361301 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mst26" event={"ID":"808bd13d-0001-4817-a36a-f93dd79b7f9d","Type":"ContainerStarted","Data":"bd19f7e1cf3210eaab369c1f93e3383e67dff716050a20792b28b26932c0682e"} Nov 23 08:35:46 crc kubenswrapper[5028]: I1123 08:35:46.365652 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.350705 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5ks4"] Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.354206 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.362357 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5ks4"] Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.376664 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mst26" event={"ID":"808bd13d-0001-4817-a36a-f93dd79b7f9d","Type":"ContainerStarted","Data":"601ae4c97695707c0c90a5f75d047921c13a003be6862b539454cbd598ff0208"} Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.439325 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtk2\" (UniqueName: \"kubernetes.io/projected/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-kube-api-access-5gtk2\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.439408 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-catalog-content\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.439455 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-utilities\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.543337 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-utilities\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.543473 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gtk2\" (UniqueName: \"kubernetes.io/projected/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-kube-api-access-5gtk2\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.543519 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-catalog-content\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.544131 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-catalog-content\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.544399 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-utilities\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.580384 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gtk2\" (UniqueName: \"kubernetes.io/projected/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-kube-api-access-5gtk2\") pod \"certified-operators-r5ks4\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:47 crc kubenswrapper[5028]: I1123 08:35:47.687907 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:48 crc kubenswrapper[5028]: I1123 08:35:48.203210 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5ks4"] Nov 23 08:35:48 crc kubenswrapper[5028]: W1123 08:35:48.208102 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f2abd2_2901_4e0d_aaf6_223e8cb839d8.slice/crio-0ecaac462f0f3dee9009e1516118193fd65bb88c5d7c9e9bea650614bfc0a751 WatchSource:0}: Error finding container 0ecaac462f0f3dee9009e1516118193fd65bb88c5d7c9e9bea650614bfc0a751: Status 404 returned error can't find the container with id 0ecaac462f0f3dee9009e1516118193fd65bb88c5d7c9e9bea650614bfc0a751 Nov 23 08:35:48 crc kubenswrapper[5028]: I1123 08:35:48.387366 5028 generic.go:334] "Generic (PLEG): container finished" podID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerID="601ae4c97695707c0c90a5f75d047921c13a003be6862b539454cbd598ff0208" exitCode=0 Nov 23 08:35:48 crc kubenswrapper[5028]: I1123 08:35:48.387498 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mst26" event={"ID":"808bd13d-0001-4817-a36a-f93dd79b7f9d","Type":"ContainerDied","Data":"601ae4c97695707c0c90a5f75d047921c13a003be6862b539454cbd598ff0208"} Nov 23 08:35:48 crc kubenswrapper[5028]: I1123 08:35:48.391208 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5ks4" event={"ID":"00f2abd2-2901-4e0d-aaf6-223e8cb839d8","Type":"ContainerStarted","Data":"4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b"} Nov 23 08:35:48 crc kubenswrapper[5028]: I1123 08:35:48.391254 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5ks4" event={"ID":"00f2abd2-2901-4e0d-aaf6-223e8cb839d8","Type":"ContainerStarted","Data":"0ecaac462f0f3dee9009e1516118193fd65bb88c5d7c9e9bea650614bfc0a751"} Nov 23 08:35:49 crc kubenswrapper[5028]: I1123 08:35:49.404831 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mst26" event={"ID":"808bd13d-0001-4817-a36a-f93dd79b7f9d","Type":"ContainerStarted","Data":"f4d1247954890ebc61c7d2f700df7b6111dae379e442eeb7abbd64edb5958a06"} Nov 23 08:35:49 crc kubenswrapper[5028]: I1123 08:35:49.407543 5028 generic.go:334] "Generic (PLEG): container finished" podID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerID="4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b" exitCode=0 Nov 23 08:35:49 crc kubenswrapper[5028]: I1123 08:35:49.407616 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5ks4" 
event={"ID":"00f2abd2-2901-4e0d-aaf6-223e8cb839d8","Type":"ContainerDied","Data":"4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b"} Nov 23 08:35:49 crc kubenswrapper[5028]: I1123 08:35:49.437730 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mst26" podStartSLOduration=1.978615108 podStartE2EDuration="4.437699602s" podCreationTimestamp="2025-11-23 08:35:45 +0000 UTC" firstStartedPulling="2025-11-23 08:35:46.365382261 +0000 UTC m=+6330.062787040" lastFinishedPulling="2025-11-23 08:35:48.824466755 +0000 UTC m=+6332.521871534" observedRunningTime="2025-11-23 08:35:49.428553968 +0000 UTC m=+6333.125958787" watchObservedRunningTime="2025-11-23 08:35:49.437699602 +0000 UTC m=+6333.135104421" Nov 23 08:35:51 crc kubenswrapper[5028]: I1123 08:35:51.439396 5028 generic.go:334] "Generic (PLEG): container finished" podID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerID="c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb" exitCode=0 Nov 23 08:35:51 crc kubenswrapper[5028]: I1123 08:35:51.440033 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5ks4" event={"ID":"00f2abd2-2901-4e0d-aaf6-223e8cb839d8","Type":"ContainerDied","Data":"c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb"} Nov 23 08:35:52 crc kubenswrapper[5028]: I1123 08:35:52.454186 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5ks4" event={"ID":"00f2abd2-2901-4e0d-aaf6-223e8cb839d8","Type":"ContainerStarted","Data":"a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d"} Nov 23 08:35:52 crc kubenswrapper[5028]: I1123 08:35:52.493090 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5ks4" podStartSLOduration=3.021859319 podStartE2EDuration="5.49305555s" podCreationTimestamp="2025-11-23 08:35:47 +0000 UTC" firstStartedPulling="2025-11-23 08:35:49.409249956 +0000 UTC m=+6333.106654735" lastFinishedPulling="2025-11-23 08:35:51.880446177 +0000 UTC m=+6335.577850966" observedRunningTime="2025-11-23 08:35:52.486550561 +0000 UTC m=+6336.183955340" watchObservedRunningTime="2025-11-23 08:35:52.49305555 +0000 UTC m=+6336.190460339" Nov 23 08:35:55 crc kubenswrapper[5028]: I1123 08:35:55.508029 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:55 crc kubenswrapper[5028]: I1123 08:35:55.508572 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:55 crc kubenswrapper[5028]: I1123 08:35:55.561675 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:56 crc kubenswrapper[5028]: I1123 08:35:56.528397 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:35:56 crc kubenswrapper[5028]: I1123 08:35:56.930205 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mst26"] Nov 23 08:35:57 crc kubenswrapper[5028]: I1123 08:35:57.688547 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:57 crc kubenswrapper[5028]: I1123 08:35:57.688628 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:57 crc kubenswrapper[5028]: I1123 08:35:57.753507 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:58 crc kubenswrapper[5028]: I1123 08:35:58.507362 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mst26" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="registry-server" containerID="cri-o://f4d1247954890ebc61c7d2f700df7b6111dae379e442eeb7abbd64edb5958a06" gracePeriod=2 Nov 23 08:35:58 crc kubenswrapper[5028]: I1123 08:35:58.562258 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:35:59 crc kubenswrapper[5028]: I1123 08:35:59.331546 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5ks4"] Nov 23 08:35:59 crc kubenswrapper[5028]: I1123 08:35:59.518400 5028 generic.go:334] "Generic (PLEG): container finished" podID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerID="f4d1247954890ebc61c7d2f700df7b6111dae379e442eeb7abbd64edb5958a06" exitCode=0 Nov 23 08:35:59 crc kubenswrapper[5028]: I1123 08:35:59.518481 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mst26" event={"ID":"808bd13d-0001-4817-a36a-f93dd79b7f9d","Type":"ContainerDied","Data":"f4d1247954890ebc61c7d2f700df7b6111dae379e442eeb7abbd64edb5958a06"} Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.147139 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.184538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-utilities\") pod \"808bd13d-0001-4817-a36a-f93dd79b7f9d\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.184646 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d74l\" (UniqueName: \"kubernetes.io/projected/808bd13d-0001-4817-a36a-f93dd79b7f9d-kube-api-access-8d74l\") pod \"808bd13d-0001-4817-a36a-f93dd79b7f9d\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.184720 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-catalog-content\") pod \"808bd13d-0001-4817-a36a-f93dd79b7f9d\" (UID: \"808bd13d-0001-4817-a36a-f93dd79b7f9d\") " Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.185858 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-utilities" (OuterVolumeSpecName: "utilities") pod "808bd13d-0001-4817-a36a-f93dd79b7f9d" (UID: "808bd13d-0001-4817-a36a-f93dd79b7f9d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.194815 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808bd13d-0001-4817-a36a-f93dd79b7f9d-kube-api-access-8d74l" (OuterVolumeSpecName: "kube-api-access-8d74l") pod "808bd13d-0001-4817-a36a-f93dd79b7f9d" (UID: "808bd13d-0001-4817-a36a-f93dd79b7f9d"). InnerVolumeSpecName "kube-api-access-8d74l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.283513 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "808bd13d-0001-4817-a36a-f93dd79b7f9d" (UID: "808bd13d-0001-4817-a36a-f93dd79b7f9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.289790 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.289848 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d74l\" (UniqueName: \"kubernetes.io/projected/808bd13d-0001-4817-a36a-f93dd79b7f9d-kube-api-access-8d74l\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.289865 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808bd13d-0001-4817-a36a-f93dd79b7f9d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.529695 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mst26" event={"ID":"808bd13d-0001-4817-a36a-f93dd79b7f9d","Type":"ContainerDied","Data":"bd19f7e1cf3210eaab369c1f93e3383e67dff716050a20792b28b26932c0682e"} Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.529744 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mst26" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.529783 5028 scope.go:117] "RemoveContainer" containerID="f4d1247954890ebc61c7d2f700df7b6111dae379e442eeb7abbd64edb5958a06" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.531050 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5ks4" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="registry-server" containerID="cri-o://a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d" gracePeriod=2 Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.568766 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mst26"] Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.571111 5028 scope.go:117] "RemoveContainer" containerID="601ae4c97695707c0c90a5f75d047921c13a003be6862b539454cbd598ff0208" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.576274 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mst26"] Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.597409 5028 scope.go:117] "RemoveContainer" containerID="762f9a81aaf150ae9378dc1c4c49156c7e0978752dc7e9e8975addc91dbb0348" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.870633 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.903785 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-catalog-content\") pod \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.904035 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-utilities\") pod \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.904098 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gtk2\" (UniqueName: \"kubernetes.io/projected/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-kube-api-access-5gtk2\") pod \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\" (UID: \"00f2abd2-2901-4e0d-aaf6-223e8cb839d8\") " Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.904807 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-utilities" (OuterVolumeSpecName: "utilities") pod "00f2abd2-2901-4e0d-aaf6-223e8cb839d8" (UID: "00f2abd2-2901-4e0d-aaf6-223e8cb839d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:00 crc kubenswrapper[5028]: I1123 08:36:00.908153 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-kube-api-access-5gtk2" (OuterVolumeSpecName: "kube-api-access-5gtk2") pod "00f2abd2-2901-4e0d-aaf6-223e8cb839d8" (UID: "00f2abd2-2901-4e0d-aaf6-223e8cb839d8"). InnerVolumeSpecName "kube-api-access-5gtk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.005907 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.005964 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gtk2\" (UniqueName: \"kubernetes.io/projected/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-kube-api-access-5gtk2\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.070145 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" path="/var/lib/kubelet/pods/808bd13d-0001-4817-a36a-f93dd79b7f9d/volumes" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.178175 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00f2abd2-2901-4e0d-aaf6-223e8cb839d8" (UID: "00f2abd2-2901-4e0d-aaf6-223e8cb839d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.210188 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f2abd2-2901-4e0d-aaf6-223e8cb839d8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.540030 5028 generic.go:334] "Generic (PLEG): container finished" podID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerID="a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d" exitCode=0 Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.540078 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5ks4" event={"ID":"00f2abd2-2901-4e0d-aaf6-223e8cb839d8","Type":"ContainerDied","Data":"a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d"} Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.540103 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5ks4" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.540127 5028 scope.go:117] "RemoveContainer" containerID="a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.540114 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5ks4" event={"ID":"00f2abd2-2901-4e0d-aaf6-223e8cb839d8","Type":"ContainerDied","Data":"0ecaac462f0f3dee9009e1516118193fd65bb88c5d7c9e9bea650614bfc0a751"} Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.556660 5028 scope.go:117] "RemoveContainer" containerID="c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.573662 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5ks4"] Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.578733 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5ks4"] Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.592472 5028 scope.go:117] "RemoveContainer" containerID="4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.612434 5028 scope.go:117] "RemoveContainer" containerID="a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d" Nov 23 08:36:01 crc kubenswrapper[5028]: E1123 08:36:01.613043 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d\": container with ID starting with a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d not found: ID does not exist" containerID="a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.613092 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d"} err="failed to get container status \"a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d\": rpc error: code = NotFound desc = could not find container \"a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d\": container with ID starting with a857a1ec6c771e2e2eea85aa0df1630ffbd8bd4d9cad1cecb4f0767b51d8b87d not found: ID does not exist" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.613127 5028 scope.go:117] "RemoveContainer" containerID="c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb" Nov 23 08:36:01 crc kubenswrapper[5028]: E1123 08:36:01.613594 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb\": container with ID starting with c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb not found: ID does not exist" containerID="c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.613632 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb"} err="failed to get container status \"c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb\": rpc error: code = NotFound desc = could not find 
container \"c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb\": container with ID starting with c4eddd94389afe810bf0babad77682aee5a53925b35d4153930a1d446a7d04fb not found: ID does not exist" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.613662 5028 scope.go:117] "RemoveContainer" containerID="4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b" Nov 23 08:36:01 crc kubenswrapper[5028]: E1123 08:36:01.613992 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b\": container with ID starting with 4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b not found: ID does not exist" containerID="4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b" Nov 23 08:36:01 crc kubenswrapper[5028]: I1123 08:36:01.614016 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b"} err="failed to get container status \"4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b\": rpc error: code = NotFound desc = could not find container \"4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b\": container with ID starting with 4f8bf8fff64cb415df27dc5ec46312d0944bcd6d27a44abfa2c0497a6031185b not found: ID does not exist" Nov 23 08:36:03 crc kubenswrapper[5028]: I1123 08:36:03.066456 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" path="/var/lib/kubelet/pods/00f2abd2-2901-4e0d-aaf6-223e8cb839d8/volumes" Nov 23 08:38:00 crc kubenswrapper[5028]: I1123 08:38:00.946243 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:38:00 crc kubenswrapper[5028]: I1123 08:38:00.948248 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:38:30 crc kubenswrapper[5028]: I1123 08:38:30.946428 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:38:30 crc kubenswrapper[5028]: I1123 08:38:30.947396 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.562996 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 08:38:49 crc kubenswrapper[5028]: E1123 08:38:49.564611 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="registry-server" Nov 23 
08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.564640 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="registry-server" Nov 23 08:38:49 crc kubenswrapper[5028]: E1123 08:38:49.564677 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="extract-content" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.564693 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="extract-content" Nov 23 08:38:49 crc kubenswrapper[5028]: E1123 08:38:49.564724 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="extract-utilities" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.564738 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="extract-utilities" Nov 23 08:38:49 crc kubenswrapper[5028]: E1123 08:38:49.564767 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="registry-server" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.564780 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="registry-server" Nov 23 08:38:49 crc kubenswrapper[5028]: E1123 08:38:49.564803 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="extract-utilities" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.564816 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="extract-utilities" Nov 23 08:38:49 crc kubenswrapper[5028]: E1123 08:38:49.564833 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="extract-content" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.564846 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="extract-content" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.566102 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f2abd2-2901-4e0d-aaf6-223e8cb839d8" containerName="registry-server" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.566224 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="808bd13d-0001-4817-a36a-f93dd79b7f9d" containerName="registry-server" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.567431 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.575768 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jz44m" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.580487 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.677770 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-096c63b8-41eb-4c75-9660-004bbe65182f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-096c63b8-41eb-4c75-9660-004bbe65182f\") pod \"mariadb-copy-data\" (UID: \"db7dc982-3c93-4b4a-a2f0-f74c5509fd77\") " pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.679186 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ck9h\" (UniqueName: \"kubernetes.io/projected/db7dc982-3c93-4b4a-a2f0-f74c5509fd77-kube-api-access-5ck9h\") pod \"mariadb-copy-data\" (UID: \"db7dc982-3c93-4b4a-a2f0-f74c5509fd77\") " pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.781673 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ck9h\" (UniqueName: \"kubernetes.io/projected/db7dc982-3c93-4b4a-a2f0-f74c5509fd77-kube-api-access-5ck9h\") pod \"mariadb-copy-data\" (UID: \"db7dc982-3c93-4b4a-a2f0-f74c5509fd77\") " pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.781979 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-096c63b8-41eb-4c75-9660-004bbe65182f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-096c63b8-41eb-4c75-9660-004bbe65182f\") pod \"mariadb-copy-data\" (UID: \"db7dc982-3c93-4b4a-a2f0-f74c5509fd77\") " pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.788043 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.788112 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-096c63b8-41eb-4c75-9660-004bbe65182f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-096c63b8-41eb-4c75-9660-004bbe65182f\") pod \"mariadb-copy-data\" (UID: \"db7dc982-3c93-4b4a-a2f0-f74c5509fd77\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f9c13bba0ae34c5950c68cf99945da93eed0e5db5c76d49a95a8049fb411cf9c/globalmount\"" pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.806174 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ck9h\" (UniqueName: \"kubernetes.io/projected/db7dc982-3c93-4b4a-a2f0-f74c5509fd77-kube-api-access-5ck9h\") pod \"mariadb-copy-data\" (UID: \"db7dc982-3c93-4b4a-a2f0-f74c5509fd77\") " pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.840830 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-096c63b8-41eb-4c75-9660-004bbe65182f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-096c63b8-41eb-4c75-9660-004bbe65182f\") pod \"mariadb-copy-data\" (UID: \"db7dc982-3c93-4b4a-a2f0-f74c5509fd77\") " pod="openstack/mariadb-copy-data" Nov 23 08:38:49 crc kubenswrapper[5028]: I1123 08:38:49.952916 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 23 08:38:50 crc kubenswrapper[5028]: I1123 08:38:50.334643 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 08:38:51 crc kubenswrapper[5028]: I1123 08:38:51.333015 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"db7dc982-3c93-4b4a-a2f0-f74c5509fd77","Type":"ContainerStarted","Data":"67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555"} Nov 23 08:38:51 crc kubenswrapper[5028]: I1123 08:38:51.333837 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"db7dc982-3c93-4b4a-a2f0-f74c5509fd77","Type":"ContainerStarted","Data":"126fcafad558672dae76cacba097b694268ac3cce46fba63f720685679c200c5"} Nov 23 08:38:51 crc kubenswrapper[5028]: I1123 08:38:51.371350 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.371312582 podStartE2EDuration="3.371312582s" podCreationTimestamp="2025-11-23 08:38:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:38:51.352361909 +0000 UTC m=+6515.049766718" watchObservedRunningTime="2025-11-23 08:38:51.371312582 +0000 UTC m=+6515.068717401" Nov 23 08:38:53 crc kubenswrapper[5028]: E1123 08:38:53.470486 5028 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:54908->38.102.83.145:39767: write tcp 38.102.83.145:54908->38.102.83.145:39767: write: broken pipe Nov 23 08:38:54 crc kubenswrapper[5028]: I1123 08:38:54.844014 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:54 crc kubenswrapper[5028]: I1123 08:38:54.849092 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:38:54 crc kubenswrapper[5028]: I1123 08:38:54.888825 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:54 crc kubenswrapper[5028]: I1123 08:38:54.893126 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grvfc\" (UniqueName: \"kubernetes.io/projected/ce697ecc-3161-4cd5-95c3-66cc4d44db20-kube-api-access-grvfc\") pod \"mariadb-client\" (UID: \"ce697ecc-3161-4cd5-95c3-66cc4d44db20\") " pod="openstack/mariadb-client" Nov 23 08:38:54 crc kubenswrapper[5028]: I1123 08:38:54.995500 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grvfc\" (UniqueName: \"kubernetes.io/projected/ce697ecc-3161-4cd5-95c3-66cc4d44db20-kube-api-access-grvfc\") pod \"mariadb-client\" (UID: \"ce697ecc-3161-4cd5-95c3-66cc4d44db20\") " pod="openstack/mariadb-client" Nov 23 08:38:55 crc kubenswrapper[5028]: I1123 08:38:55.019105 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grvfc\" (UniqueName: \"kubernetes.io/projected/ce697ecc-3161-4cd5-95c3-66cc4d44db20-kube-api-access-grvfc\") pod \"mariadb-client\" (UID: \"ce697ecc-3161-4cd5-95c3-66cc4d44db20\") " pod="openstack/mariadb-client" Nov 23 08:38:55 crc kubenswrapper[5028]: I1123 08:38:55.189009 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:38:55 crc kubenswrapper[5028]: I1123 08:38:55.718698 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:55 crc kubenswrapper[5028]: W1123 08:38:55.731243 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce697ecc_3161_4cd5_95c3_66cc4d44db20.slice/crio-32a748a0b3f4c93d442631bd67f1bcde0449c9378016df6d98bb4c229c6d234c WatchSource:0}: Error finding container 32a748a0b3f4c93d442631bd67f1bcde0449c9378016df6d98bb4c229c6d234c: Status 404 returned error can't find the container with id 32a748a0b3f4c93d442631bd67f1bcde0449c9378016df6d98bb4c229c6d234c Nov 23 08:38:56 crc kubenswrapper[5028]: I1123 08:38:56.386051 5028 generic.go:334] "Generic (PLEG): container finished" podID="ce697ecc-3161-4cd5-95c3-66cc4d44db20" containerID="2c60705715c668f7a5bb9de5fcafff5ce67e7caef599faca9140a006a9ae1080" exitCode=0 Nov 23 08:38:56 crc kubenswrapper[5028]: I1123 08:38:56.386201 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ce697ecc-3161-4cd5-95c3-66cc4d44db20","Type":"ContainerDied","Data":"2c60705715c668f7a5bb9de5fcafff5ce67e7caef599faca9140a006a9ae1080"} Nov 23 08:38:56 crc kubenswrapper[5028]: I1123 08:38:56.386589 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"ce697ecc-3161-4cd5-95c3-66cc4d44db20","Type":"ContainerStarted","Data":"32a748a0b3f4c93d442631bd67f1bcde0449c9378016df6d98bb4c229c6d234c"} Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.744533 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.782136 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_ce697ecc-3161-4cd5-95c3-66cc4d44db20/mariadb-client/0.log" Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.813803 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.823256 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.850314 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grvfc\" (UniqueName: \"kubernetes.io/projected/ce697ecc-3161-4cd5-95c3-66cc4d44db20-kube-api-access-grvfc\") pod \"ce697ecc-3161-4cd5-95c3-66cc4d44db20\" (UID: \"ce697ecc-3161-4cd5-95c3-66cc4d44db20\") " Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.864246 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce697ecc-3161-4cd5-95c3-66cc4d44db20-kube-api-access-grvfc" (OuterVolumeSpecName: "kube-api-access-grvfc") pod "ce697ecc-3161-4cd5-95c3-66cc4d44db20" (UID: "ce697ecc-3161-4cd5-95c3-66cc4d44db20"). InnerVolumeSpecName "kube-api-access-grvfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.953843 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grvfc\" (UniqueName: \"kubernetes.io/projected/ce697ecc-3161-4cd5-95c3-66cc4d44db20-kube-api-access-grvfc\") on node \"crc\" DevicePath \"\"" Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.997001 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:57 crc kubenswrapper[5028]: E1123 08:38:57.997504 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce697ecc-3161-4cd5-95c3-66cc4d44db20" containerName="mariadb-client" Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.997548 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce697ecc-3161-4cd5-95c3-66cc4d44db20" containerName="mariadb-client" Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.997817 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce697ecc-3161-4cd5-95c3-66cc4d44db20" containerName="mariadb-client" Nov 23 08:38:57 crc kubenswrapper[5028]: I1123 08:38:57.998520 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.004548 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.063994 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttmr\" (UniqueName: \"kubernetes.io/projected/b224de47-358d-4624-9391-dc2a8b55d836-kube-api-access-cttmr\") pod \"mariadb-client\" (UID: \"b224de47-358d-4624-9391-dc2a8b55d836\") " pod="openstack/mariadb-client" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.167350 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cttmr\" (UniqueName: \"kubernetes.io/projected/b224de47-358d-4624-9391-dc2a8b55d836-kube-api-access-cttmr\") pod \"mariadb-client\" (UID: \"b224de47-358d-4624-9391-dc2a8b55d836\") " pod="openstack/mariadb-client" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.186203 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cttmr\" (UniqueName: \"kubernetes.io/projected/b224de47-358d-4624-9391-dc2a8b55d836-kube-api-access-cttmr\") pod \"mariadb-client\" (UID: \"b224de47-358d-4624-9391-dc2a8b55d836\") " pod="openstack/mariadb-client" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.364896 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.411756 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a748a0b3f4c93d442631bd67f1bcde0449c9378016df6d98bb4c229c6d234c" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.411802 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.460580 5028 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="ce697ecc-3161-4cd5-95c3-66cc4d44db20" podUID="b224de47-358d-4624-9391-dc2a8b55d836" Nov 23 08:38:58 crc kubenswrapper[5028]: I1123 08:38:58.630982 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:38:59 crc kubenswrapper[5028]: I1123 08:38:59.065882 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce697ecc-3161-4cd5-95c3-66cc4d44db20" path="/var/lib/kubelet/pods/ce697ecc-3161-4cd5-95c3-66cc4d44db20/volumes" Nov 23 08:38:59 crc kubenswrapper[5028]: I1123 08:38:59.425553 5028 generic.go:334] "Generic (PLEG): container finished" podID="b224de47-358d-4624-9391-dc2a8b55d836" containerID="6af1ca7a58c090bc5d411a79291d29d47bb6b5693740f77206b7f86dee167568" exitCode=0 Nov 23 08:38:59 crc kubenswrapper[5028]: I1123 08:38:59.425636 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b224de47-358d-4624-9391-dc2a8b55d836","Type":"ContainerDied","Data":"6af1ca7a58c090bc5d411a79291d29d47bb6b5693740f77206b7f86dee167568"} Nov 23 08:38:59 crc kubenswrapper[5028]: I1123 08:38:59.425682 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b224de47-358d-4624-9391-dc2a8b55d836","Type":"ContainerStarted","Data":"f72cd59a790e2a895856159af973dccbb481c6b9cebc453d179cf722498f0280"} Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.806986 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.827833 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_b224de47-358d-4624-9391-dc2a8b55d836/mariadb-client/0.log" Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.857411 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.862519 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.929416 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cttmr\" (UniqueName: \"kubernetes.io/projected/b224de47-358d-4624-9391-dc2a8b55d836-kube-api-access-cttmr\") pod \"b224de47-358d-4624-9391-dc2a8b55d836\" (UID: \"b224de47-358d-4624-9391-dc2a8b55d836\") " Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.937480 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b224de47-358d-4624-9391-dc2a8b55d836-kube-api-access-cttmr" (OuterVolumeSpecName: "kube-api-access-cttmr") pod "b224de47-358d-4624-9391-dc2a8b55d836" (UID: "b224de47-358d-4624-9391-dc2a8b55d836"). InnerVolumeSpecName "kube-api-access-cttmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.946009 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.946103 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.946185 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.947249 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"831fc7cc0754cd125280584b4847bfdb0bf23ab4a16c900638efc104572da6ce"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:39:00 crc kubenswrapper[5028]: I1123 08:39:00.947340 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://831fc7cc0754cd125280584b4847bfdb0bf23ab4a16c900638efc104572da6ce" gracePeriod=600 Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.032015 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cttmr\" (UniqueName: \"kubernetes.io/projected/b224de47-358d-4624-9391-dc2a8b55d836-kube-api-access-cttmr\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.064487 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b224de47-358d-4624-9391-dc2a8b55d836" path="/var/lib/kubelet/pods/b224de47-358d-4624-9391-dc2a8b55d836/volumes" Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.448237 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="831fc7cc0754cd125280584b4847bfdb0bf23ab4a16c900638efc104572da6ce" exitCode=0 Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.448355 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"831fc7cc0754cd125280584b4847bfdb0bf23ab4a16c900638efc104572da6ce"} Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.450501 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"} Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.450553 5028 scope.go:117] "RemoveContainer" containerID="22042e396eb1bd5087efb82e303b2e0c473bf9b6a8446e9594039c241ca7e423" Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.456804 5028 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 23 08:39:01 crc kubenswrapper[5028]: I1123 08:39:01.492547 5028 scope.go:117] "RemoveContainer" containerID="6af1ca7a58c090bc5d411a79291d29d47bb6b5693740f77206b7f86dee167568" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.487860 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 08:39:32 crc kubenswrapper[5028]: E1123 08:39:32.489229 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b224de47-358d-4624-9391-dc2a8b55d836" containerName="mariadb-client" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.489248 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b224de47-358d-4624-9391-dc2a8b55d836" containerName="mariadb-client" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.489477 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b224de47-358d-4624-9391-dc2a8b55d836" containerName="mariadb-client" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.490749 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.495496 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.495556 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9hvqs" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.496371 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.516054 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.531459 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.533232 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.541385 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.544289 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.556082 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.563024 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.654684 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655029 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/183bb332-8d63-4a1e-bce1-d739b4924f4a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655152 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/791bb325-d8f6-48bc-8b4d-1fca822131f9-config\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655240 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bb332-8d63-4a1e-bce1-d739b4924f4a-config\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655328 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655410 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655506 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwdh\" (UniqueName: \"kubernetes.io/projected/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-kube-api-access-zhwdh\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655590 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 
08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655838 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791bb325-d8f6-48bc-8b4d-1fca822131f9-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.655934 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/791bb325-d8f6-48bc-8b4d-1fca822131f9-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.656110 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-config\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.656219 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzz9z\" (UniqueName: \"kubernetes.io/projected/183bb332-8d63-4a1e-bce1-d739b4924f4a-kube-api-access-fzz9z\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.656309 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.656513 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvcg\" (UniqueName: \"kubernetes.io/projected/791bb325-d8f6-48bc-8b4d-1fca822131f9-kube-api-access-zsvcg\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.656861 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/791bb325-d8f6-48bc-8b4d-1fca822131f9-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.656941 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.657053 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/183bb332-8d63-4a1e-bce1-d739b4924f4a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.657206 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183bb332-8d63-4a1e-bce1-d739b4924f4a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.675893 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.684152 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.705585 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9xhbk" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.705745 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.706003 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.706809 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.733625 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.740440 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.753902 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.758088 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761315 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19116d9f-b4aa-4c04-9e25-35535d32165a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761407 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791bb325-d8f6-48bc-8b4d-1fca822131f9-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761447 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/791bb325-d8f6-48bc-8b4d-1fca822131f9-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761482 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19116d9f-b4aa-4c04-9e25-35535d32165a-config\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761509 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-config\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761559 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzz9z\" (UniqueName: \"kubernetes.io/projected/183bb332-8d63-4a1e-bce1-d739b4924f4a-kube-api-access-fzz9z\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761606 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf4k5\" (UniqueName: \"kubernetes.io/projected/19116d9f-b4aa-4c04-9e25-35535d32165a-kube-api-access-nf4k5\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761719 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761804 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19116d9f-b4aa-4c04-9e25-35535d32165a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761869 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.761906 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvcg\" (UniqueName: \"kubernetes.io/projected/791bb325-d8f6-48bc-8b4d-1fca822131f9-kube-api-access-zsvcg\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762002 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/791bb325-d8f6-48bc-8b4d-1fca822131f9-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762063 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762150 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19116d9f-b4aa-4c04-9e25-35535d32165a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762206 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/183bb332-8d63-4a1e-bce1-d739b4924f4a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762249 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183bb332-8d63-4a1e-bce1-d739b4924f4a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762282 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762308 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/183bb332-8d63-4a1e-bce1-d739b4924f4a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762334 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/791bb325-d8f6-48bc-8b4d-1fca822131f9-config\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " 
pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762367 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bb332-8d63-4a1e-bce1-d739b4924f4a-config\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762407 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762448 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762480 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhwdh\" (UniqueName: \"kubernetes.io/projected/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-kube-api-access-zhwdh\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.762519 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.763301 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/791bb325-d8f6-48bc-8b4d-1fca822131f9-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.763793 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-config\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.764400 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.764602 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.764700 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/791bb325-d8f6-48bc-8b4d-1fca822131f9-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.765092 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/183bb332-8d63-4a1e-bce1-d739b4924f4a-config\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.766410 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/183bb332-8d63-4a1e-bce1-d739b4924f4a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.766827 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/791bb325-d8f6-48bc-8b4d-1fca822131f9-config\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.768207 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.768604 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/436a39c8738bc6aef6e8e70a41b82e0affe3a2a9a46bfe6df0f55a87211aca39/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.768345 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.768703 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6da587f41b1eccd6f715281d703b7f46a6b203cd8676a1d6c38412e64cffa1ef/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.770663 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183bb332-8d63-4a1e-bce1-d739b4924f4a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.773318 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.775021 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.775311 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe700b1831ca8f498f3ce170e2e0ee6f83567a7a84600a0d43210b0bb42ef996/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.781242 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/791bb325-d8f6-48bc-8b4d-1fca822131f9-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.781346 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.782134 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/183bb332-8d63-4a1e-bce1-d739b4924f4a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.786540 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvcg\" (UniqueName: \"kubernetes.io/projected/791bb325-d8f6-48bc-8b4d-1fca822131f9-kube-api-access-zsvcg\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.787064 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzz9z\" (UniqueName: 
\"kubernetes.io/projected/183bb332-8d63-4a1e-bce1-d739b4924f4a-kube-api-access-fzz9z\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.794304 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhwdh\" (UniqueName: \"kubernetes.io/projected/ff3b9990-4fd4-4e2c-bff3-2717ec516b89-kube-api-access-zhwdh\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.794423 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.816596 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e323df1e-627b-4e38-9fb4-f04e62c34190\") pod \"ovsdbserver-nb-1\" (UID: \"183bb332-8d63-4a1e-bce1-d739b4924f4a\") " pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.820403 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b296c4b-a884-46f7-ac5e-f655e3b0dae7\") pod \"ovsdbserver-nb-2\" (UID: \"791bb325-d8f6-48bc-8b4d-1fca822131f9\") " pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.822168 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-35187bc1-e7b8-4cf4-88db-c769ce8bdbdb\") pod \"ovsdbserver-nb-0\" (UID: \"ff3b9990-4fd4-4e2c-bff3-2717ec516b89\") " pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.863886 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864316 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58dc09f-661c-4742-8b98-c92a2ce35664-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864377 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4k5\" (UniqueName: \"kubernetes.io/projected/19116d9f-b4aa-4c04-9e25-35535d32165a-kube-api-access-nf4k5\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864500 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58dc09f-661c-4742-8b98-c92a2ce35664-config\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864547 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58dc09f-661c-4742-8b98-c92a2ce35664-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864653 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5hc\" (UniqueName: \"kubernetes.io/projected/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-kube-api-access-2q5hc\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864716 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b58dc09f-661c-4742-8b98-c92a2ce35664-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864750 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b45fc97-995b-4411-befb-04bde2433d35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b45fc97-995b-4411-befb-04bde2433d35\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864833 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864877 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19116d9f-b4aa-4c04-9e25-35535d32165a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " 
pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864908 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864960 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19116d9f-b4aa-4c04-9e25-35535d32165a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.864998 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68bz5\" (UniqueName: \"kubernetes.io/projected/b58dc09f-661c-4742-8b98-c92a2ce35664-kube-api-access-68bz5\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.865035 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-config\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.865058 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.865101 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.865156 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.865185 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19116d9f-b4aa-4c04-9e25-35535d32165a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.865211 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19116d9f-b4aa-4c04-9e25-35535d32165a-config\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.866285 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/19116d9f-b4aa-4c04-9e25-35535d32165a-config\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.866691 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19116d9f-b4aa-4c04-9e25-35535d32165a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.866817 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19116d9f-b4aa-4c04-9e25-35535d32165a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.867237 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.867289 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/885e4bad3f9c43a145be5f0e4334b083a657282ac06acf913575a0d2a2667c69/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.868809 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19116d9f-b4aa-4c04-9e25-35535d32165a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.878412 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.885291 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4k5\" (UniqueName: \"kubernetes.io/projected/19116d9f-b4aa-4c04-9e25-35535d32165a-kube-api-access-nf4k5\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.913299 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e4d0500-62d3-4910-8f19-4de2203a24ff\") pod \"ovsdbserver-sb-0\" (UID: \"19116d9f-b4aa-4c04-9e25-35535d32165a\") " pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967502 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58dc09f-661c-4742-8b98-c92a2ce35664-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967589 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58dc09f-661c-4742-8b98-c92a2ce35664-config\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967615 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58dc09f-661c-4742-8b98-c92a2ce35664-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967644 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5hc\" (UniqueName: \"kubernetes.io/projected/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-kube-api-access-2q5hc\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967669 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b58dc09f-661c-4742-8b98-c92a2ce35664-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967694 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b45fc97-995b-4411-befb-04bde2433d35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b45fc97-995b-4411-befb-04bde2433d35\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967731 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967822 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-68bz5\" (UniqueName: \"kubernetes.io/projected/b58dc09f-661c-4742-8b98-c92a2ce35664-kube-api-access-68bz5\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967859 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-config\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967883 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967928 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.967961 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.971774 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-config\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.973263 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b58dc09f-661c-4742-8b98-c92a2ce35664-config\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.974200 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b58dc09f-661c-4742-8b98-c92a2ce35664-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.974616 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58dc09f-661c-4742-8b98-c92a2ce35664-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.975775 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.976004 5028 csi_attacher.go:380] 
kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.976054 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b45fc97-995b-4411-befb-04bde2433d35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b45fc97-995b-4411-befb-04bde2433d35\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/690779140192abc224f1a88033d35b23ceb2c731110201f8b48b0616019544c3/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.977193 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58dc09f-661c-4742-8b98-c92a2ce35664-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.988550 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.988595 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4151501f151d3786d3c7661ea86e5a0a2198807f017f11165fae415c8032cacb/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.993343 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.994244 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:32 crc kubenswrapper[5028]: I1123 08:39:32.994902 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5hc\" (UniqueName: \"kubernetes.io/projected/3d7e60b0-7a1f-49b9-aeaf-19c92d93008d-kube-api-access-2q5hc\") pod \"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.000931 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68bz5\" (UniqueName: \"kubernetes.io/projected/b58dc09f-661c-4742-8b98-c92a2ce35664-kube-api-access-68bz5\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.029465 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b45fc97-995b-4411-befb-04bde2433d35\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b45fc97-995b-4411-befb-04bde2433d35\") pod 
\"ovsdbserver-sb-2\" (UID: \"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d\") " pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.038633 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ba52c0f-7de3-4eae-b3cb-3c533204f6c7\") pod \"ovsdbserver-sb-1\" (UID: \"b58dc09f-661c-4742-8b98-c92a2ce35664\") " pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.059612 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.118210 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.145807 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.154650 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.425772 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.532816 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.647780 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.750203 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 23 08:39:33 crc kubenswrapper[5028]: W1123 08:39:33.761277 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff3b9990_4fd4_4e2c_bff3_2717ec516b89.slice/crio-a769183d799f13360b8b1d22ce007311d52d365eab22aeb0a19516840d850c31 WatchSource:0}: Error finding container a769183d799f13360b8b1d22ce007311d52d365eab22aeb0a19516840d850c31: Status 404 returned error can't find the container with id a769183d799f13360b8b1d22ce007311d52d365eab22aeb0a19516840d850c31 Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.795273 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ff3b9990-4fd4-4e2c-bff3-2717ec516b89","Type":"ContainerStarted","Data":"a769183d799f13360b8b1d22ce007311d52d365eab22aeb0a19516840d850c31"} Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.797903 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"183bb332-8d63-4a1e-bce1-d739b4924f4a","Type":"ContainerStarted","Data":"11a1231cf069a91750688ffc668bffc4365991f3199d28037448a7def8a5279d"} Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.800644 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"791bb325-d8f6-48bc-8b4d-1fca822131f9","Type":"ContainerStarted","Data":"93b17328a3714cc278177bb07a3f27db3e81bec6b76088a1e174e6320ac9ce0f"} Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.803988 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"19116d9f-b4aa-4c04-9e25-35535d32165a","Type":"ContainerStarted","Data":"c2615fc99de024a87b8dc185f874c07ea1ad0a2e05cfbb3c69218fccddded441"} 
Nov 23 08:39:33 crc kubenswrapper[5028]: I1123 08:39:33.857099 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 23 08:39:33 crc kubenswrapper[5028]: W1123 08:39:33.872055 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb58dc09f_661c_4742_8b98_c92a2ce35664.slice/crio-3521e8db49308f292424b98c0a1ebd3ddcbd1fb8ff495059c5ba683e115983a6 WatchSource:0}: Error finding container 3521e8db49308f292424b98c0a1ebd3ddcbd1fb8ff495059c5ba683e115983a6: Status 404 returned error can't find the container with id 3521e8db49308f292424b98c0a1ebd3ddcbd1fb8ff495059c5ba683e115983a6 Nov 23 08:39:34 crc kubenswrapper[5028]: I1123 08:39:34.386659 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 23 08:39:34 crc kubenswrapper[5028]: W1123 08:39:34.401321 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d7e60b0_7a1f_49b9_aeaf_19c92d93008d.slice/crio-7a764ed3e18577326502c308f5a7043f9e33d5eefe43628395509a587503a998 WatchSource:0}: Error finding container 7a764ed3e18577326502c308f5a7043f9e33d5eefe43628395509a587503a998: Status 404 returned error can't find the container with id 7a764ed3e18577326502c308f5a7043f9e33d5eefe43628395509a587503a998 Nov 23 08:39:34 crc kubenswrapper[5028]: I1123 08:39:34.818127 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"b58dc09f-661c-4742-8b98-c92a2ce35664","Type":"ContainerStarted","Data":"3521e8db49308f292424b98c0a1ebd3ddcbd1fb8ff495059c5ba683e115983a6"} Nov 23 08:39:34 crc kubenswrapper[5028]: I1123 08:39:34.823574 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d","Type":"ContainerStarted","Data":"7a764ed3e18577326502c308f5a7043f9e33d5eefe43628395509a587503a998"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.874370 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"19116d9f-b4aa-4c04-9e25-35535d32165a","Type":"ContainerStarted","Data":"879c23780556fafc93f7b3367a753fae7006a153e6c48f2a636e8e2b774abb72"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.875311 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"19116d9f-b4aa-4c04-9e25-35535d32165a","Type":"ContainerStarted","Data":"8ea450926b7e2f8bf1ba27faf6c8a550abb782d00d8d040c2fe83fb1c8a21fe9"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.879757 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ff3b9990-4fd4-4e2c-bff3-2717ec516b89","Type":"ContainerStarted","Data":"5c336e283ffe7d744d121d3bdf063c57765152ad45b4910a12d3b9fa97ee6654"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.879831 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ff3b9990-4fd4-4e2c-bff3-2717ec516b89","Type":"ContainerStarted","Data":"ebf2e0a24cb8825403598e3621110f4cf86502e750dfa006781d342f8ba559e6"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.882836 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"183bb332-8d63-4a1e-bce1-d739b4924f4a","Type":"ContainerStarted","Data":"8cbee784cffd931ceffe9c48ca051b0ae177ab99accce2ec6b39a4f1119e62ca"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 
08:39:38.882886 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"183bb332-8d63-4a1e-bce1-d739b4924f4a","Type":"ContainerStarted","Data":"bdc81dbfffcac574dfdb176e479c0f6c12a7a69bf2c561739d1da3de2c31a4a7"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.884803 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d","Type":"ContainerStarted","Data":"ed74988cb12dc2aa709fc06de398c572219112c29429b6ae8727f8c6670719eb"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.884874 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"3d7e60b0-7a1f-49b9-aeaf-19c92d93008d","Type":"ContainerStarted","Data":"44d56f9b3cf9952ee6373a7d2d6e73f06b6a8d8ee59eb30f2d0c9acc1278a8c2"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.887427 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"791bb325-d8f6-48bc-8b4d-1fca822131f9","Type":"ContainerStarted","Data":"2acea9a9a4e9e378a7a5bf87ba6b23db99fed24b77dbf1ab970cd26525bab5ba"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.887488 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"791bb325-d8f6-48bc-8b4d-1fca822131f9","Type":"ContainerStarted","Data":"ca497f93042c26124fdd5dacb31c844dd72c22f2b2e0c01502e8b4dbaf0ec174"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.891470 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"b58dc09f-661c-4742-8b98-c92a2ce35664","Type":"ContainerStarted","Data":"d6f20f6c3668695007821e26c7dceb30778e527aa11e76ee6fffa6c2b20ab7d2"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.891514 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"b58dc09f-661c-4742-8b98-c92a2ce35664","Type":"ContainerStarted","Data":"b1c2f6c8a9d81f110f521fa62e2bc631d1cca4dcd585e33984c0c86e80360a75"} Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.904870 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.357319438 podStartE2EDuration="7.904846639s" podCreationTimestamp="2025-11-23 08:39:31 +0000 UTC" firstStartedPulling="2025-11-23 08:39:33.659920661 +0000 UTC m=+6557.357325440" lastFinishedPulling="2025-11-23 08:39:38.207447862 +0000 UTC m=+6561.904852641" observedRunningTime="2025-11-23 08:39:38.902175143 +0000 UTC m=+6562.599579922" watchObservedRunningTime="2025-11-23 08:39:38.904846639 +0000 UTC m=+6562.602251418" Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.926571 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.83868384 podStartE2EDuration="7.92655094s" podCreationTimestamp="2025-11-23 08:39:31 +0000 UTC" firstStartedPulling="2025-11-23 08:39:33.875543544 +0000 UTC m=+6557.572948323" lastFinishedPulling="2025-11-23 08:39:37.963410644 +0000 UTC m=+6561.660815423" observedRunningTime="2025-11-23 08:39:38.92289129 +0000 UTC m=+6562.620296079" watchObservedRunningTime="2025-11-23 08:39:38.92655094 +0000 UTC m=+6562.623955719" Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.948896 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.372973977 podStartE2EDuration="7.948864065s" 
podCreationTimestamp="2025-11-23 08:39:31 +0000 UTC" firstStartedPulling="2025-11-23 08:39:34.406352077 +0000 UTC m=+6558.103756856" lastFinishedPulling="2025-11-23 08:39:37.982242165 +0000 UTC m=+6561.679646944" observedRunningTime="2025-11-23 08:39:38.939304222 +0000 UTC m=+6562.636709001" watchObservedRunningTime="2025-11-23 08:39:38.948864065 +0000 UTC m=+6562.646268844" Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.963509 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.433157242 podStartE2EDuration="7.963481913s" podCreationTimestamp="2025-11-23 08:39:31 +0000 UTC" firstStartedPulling="2025-11-23 08:39:33.433334969 +0000 UTC m=+6557.130739748" lastFinishedPulling="2025-11-23 08:39:37.96365964 +0000 UTC m=+6561.661064419" observedRunningTime="2025-11-23 08:39:38.962096379 +0000 UTC m=+6562.659501158" watchObservedRunningTime="2025-11-23 08:39:38.963481913 +0000 UTC m=+6562.660886712" Nov 23 08:39:38 crc kubenswrapper[5028]: I1123 08:39:38.988642 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.585861057 podStartE2EDuration="7.988621058s" podCreationTimestamp="2025-11-23 08:39:31 +0000 UTC" firstStartedPulling="2025-11-23 08:39:33.563220496 +0000 UTC m=+6557.260625275" lastFinishedPulling="2025-11-23 08:39:37.965980497 +0000 UTC m=+6561.663385276" observedRunningTime="2025-11-23 08:39:38.983436481 +0000 UTC m=+6562.680841260" watchObservedRunningTime="2025-11-23 08:39:38.988621058 +0000 UTC m=+6562.686025827" Nov 23 08:39:39 crc kubenswrapper[5028]: I1123 08:39:39.007524 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.5875428080000002 podStartE2EDuration="8.007496069s" podCreationTimestamp="2025-11-23 08:39:31 +0000 UTC" firstStartedPulling="2025-11-23 08:39:33.76577958 +0000 UTC m=+6557.463184359" lastFinishedPulling="2025-11-23 08:39:38.185732841 +0000 UTC m=+6561.883137620" observedRunningTime="2025-11-23 08:39:39.002978269 +0000 UTC m=+6562.700383038" watchObservedRunningTime="2025-11-23 08:39:39.007496069 +0000 UTC m=+6562.704900848" Nov 23 08:39:39 crc kubenswrapper[5028]: I1123 08:39:39.064049 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:39 crc kubenswrapper[5028]: I1123 08:39:39.119249 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:39 crc kubenswrapper[5028]: I1123 08:39:39.147766 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:39 crc kubenswrapper[5028]: I1123 08:39:39.156190 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:41 crc kubenswrapper[5028]: I1123 08:39:41.864915 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:41 crc kubenswrapper[5028]: I1123 08:39:41.879461 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:41 crc kubenswrapper[5028]: I1123 08:39:41.910559 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:41 crc kubenswrapper[5028]: I1123 08:39:41.919464 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:41 crc kubenswrapper[5028]: I1123 08:39:41.924010 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:41 crc kubenswrapper[5028]: I1123 08:39:41.924297 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.110808 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.111590 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.165346 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.165841 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.200105 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.200635 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.211762 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:42 crc kubenswrapper[5028]: I1123 08:39:42.212509 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.121743 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.168292 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.233107 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.235767 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.395726 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8fcf5f7-8vwtl"] Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.397194 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.399991 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.424705 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8fcf5f7-8vwtl"] Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.503143 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvw6\" (UniqueName: \"kubernetes.io/projected/2fa438c8-dd88-4edd-80a2-f943cb952071-kube-api-access-9fvw6\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.503212 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-dns-svc\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.503239 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-ovsdbserver-sb\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.503640 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-config\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.537323 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fcf5f7-8vwtl"] Nov 23 08:39:43 crc kubenswrapper[5028]: E1123 08:39:43.538162 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-9fvw6 ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" podUID="2fa438c8-dd88-4edd-80a2-f943cb952071" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.577838 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-657df5f45c-s72r9"] Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.579534 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.587591 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.594123 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657df5f45c-s72r9"] Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.605478 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvw6\" (UniqueName: \"kubernetes.io/projected/2fa438c8-dd88-4edd-80a2-f943cb952071-kube-api-access-9fvw6\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.605546 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-dns-svc\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.605577 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-ovsdbserver-sb\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.605665 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-config\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.606865 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-config\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.607494 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-ovsdbserver-sb\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.608321 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-dns-svc\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.636025 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvw6\" (UniqueName: \"kubernetes.io/projected/2fa438c8-dd88-4edd-80a2-f943cb952071-kube-api-access-9fvw6\") pod \"dnsmasq-dns-8fcf5f7-8vwtl\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.707364 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-config\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.707408 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-dns-svc\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.707437 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-nb\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.707705 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fthjg\" (UniqueName: \"kubernetes.io/projected/5cf4c47e-bde8-4d99-accf-2f7383405c64-kube-api-access-fthjg\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.707811 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-sb\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.810241 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-config\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.810299 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-dns-svc\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.810328 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-nb\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.810378 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fthjg\" (UniqueName: \"kubernetes.io/projected/5cf4c47e-bde8-4d99-accf-2f7383405c64-kube-api-access-fthjg\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.810402 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-sb\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.811257 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-config\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.811858 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-nb\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.811900 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-sb\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.812432 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-dns-svc\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.840934 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fthjg\" (UniqueName: \"kubernetes.io/projected/5cf4c47e-bde8-4d99-accf-2f7383405c64-kube-api-access-fthjg\") pod \"dnsmasq-dns-657df5f45c-s72r9\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.903448 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:43 crc kubenswrapper[5028]: I1123 08:39:43.969232 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.045898 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.217135 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-config\") pod \"2fa438c8-dd88-4edd-80a2-f943cb952071\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.217219 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-dns-svc\") pod \"2fa438c8-dd88-4edd-80a2-f943cb952071\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.217388 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fvw6\" (UniqueName: \"kubernetes.io/projected/2fa438c8-dd88-4edd-80a2-f943cb952071-kube-api-access-9fvw6\") pod \"2fa438c8-dd88-4edd-80a2-f943cb952071\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.217464 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-ovsdbserver-sb\") pod \"2fa438c8-dd88-4edd-80a2-f943cb952071\" (UID: \"2fa438c8-dd88-4edd-80a2-f943cb952071\") " Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.219058 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-config" (OuterVolumeSpecName: "config") pod "2fa438c8-dd88-4edd-80a2-f943cb952071" (UID: "2fa438c8-dd88-4edd-80a2-f943cb952071"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.220063 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2fa438c8-dd88-4edd-80a2-f943cb952071" (UID: "2fa438c8-dd88-4edd-80a2-f943cb952071"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.220755 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2fa438c8-dd88-4edd-80a2-f943cb952071" (UID: "2fa438c8-dd88-4edd-80a2-f943cb952071"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.226483 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa438c8-dd88-4edd-80a2-f943cb952071-kube-api-access-9fvw6" (OuterVolumeSpecName: "kube-api-access-9fvw6") pod "2fa438c8-dd88-4edd-80a2-f943cb952071" (UID: "2fa438c8-dd88-4edd-80a2-f943cb952071"). InnerVolumeSpecName "kube-api-access-9fvw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.319435 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.319510 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fvw6\" (UniqueName: \"kubernetes.io/projected/2fa438c8-dd88-4edd-80a2-f943cb952071-kube-api-access-9fvw6\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.319525 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.319541 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa438c8-dd88-4edd-80a2-f943cb952071-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.493248 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-657df5f45c-s72r9"] Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.980431 5028 generic.go:334] "Generic (PLEG): container finished" podID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerID="8dad258f23fe121485ad698ac9cb22c9e9eee3ee6ae67f1f9289e619b4f75b77" exitCode=0 Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.980484 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" event={"ID":"5cf4c47e-bde8-4d99-accf-2f7383405c64","Type":"ContainerDied","Data":"8dad258f23fe121485ad698ac9cb22c9e9eee3ee6ae67f1f9289e619b4f75b77"} Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.980537 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8fcf5f7-8vwtl" Nov 23 08:39:44 crc kubenswrapper[5028]: I1123 08:39:44.980535 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" event={"ID":"5cf4c47e-bde8-4d99-accf-2f7383405c64","Type":"ContainerStarted","Data":"05ba855a990c4a21253b5964f0e9f5b8407a6439996ca59f0525775f09e08231"} Nov 23 08:39:45 crc kubenswrapper[5028]: I1123 08:39:45.203863 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8fcf5f7-8vwtl"] Nov 23 08:39:45 crc kubenswrapper[5028]: I1123 08:39:45.209529 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8fcf5f7-8vwtl"] Nov 23 08:39:45 crc kubenswrapper[5028]: I1123 08:39:45.993718 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" event={"ID":"5cf4c47e-bde8-4d99-accf-2f7383405c64","Type":"ContainerStarted","Data":"54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3"} Nov 23 08:39:45 crc kubenswrapper[5028]: I1123 08:39:45.995418 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:46 crc kubenswrapper[5028]: I1123 08:39:46.026750 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" podStartSLOduration=3.026704592 podStartE2EDuration="3.026704592s" podCreationTimestamp="2025-11-23 08:39:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:39:46.017759474 +0000 UTC m=+6569.715164263" watchObservedRunningTime="2025-11-23 08:39:46.026704592 +0000 UTC m=+6569.724109421" Nov 23 08:39:47 crc kubenswrapper[5028]: I1123 08:39:47.063219 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa438c8-dd88-4edd-80a2-f943cb952071" path="/var/lib/kubelet/pods/2fa438c8-dd88-4edd-80a2-f943cb952071/volumes" Nov 23 08:39:47 crc kubenswrapper[5028]: I1123 08:39:47.906325 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 23 08:39:47 crc kubenswrapper[5028]: I1123 08:39:47.921049 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 23 08:39:50 crc kubenswrapper[5028]: I1123 08:39:50.896054 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 23 08:39:50 crc kubenswrapper[5028]: I1123 08:39:50.899284 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 08:39:50 crc kubenswrapper[5028]: I1123 08:39:50.901708 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 23 08:39:50 crc kubenswrapper[5028]: I1123 08:39:50.904847 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.070803 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.070866 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvp6f\" (UniqueName: \"kubernetes.io/projected/11216e20-1103-4fa8-b4fb-df9556d9114b-kube-api-access-hvp6f\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.070907 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/11216e20-1103-4fa8-b4fb-df9556d9114b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.178567 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/11216e20-1103-4fa8-b4fb-df9556d9114b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.179096 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.179188 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvp6f\" (UniqueName: \"kubernetes.io/projected/11216e20-1103-4fa8-b4fb-df9556d9114b-kube-api-access-hvp6f\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.184883 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.184913 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fb95e04d72a6d9f3e5a2db3488ac44f9ca7c9bbf1bbdf32dd43f9cfaad550c9a/globalmount\"" pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.198105 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/11216e20-1103-4fa8-b4fb-df9556d9114b-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.205837 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvp6f\" (UniqueName: \"kubernetes.io/projected/11216e20-1103-4fa8-b4fb-df9556d9114b-kube-api-access-hvp6f\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.227176 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\") pod \"ovn-copy-data\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.260393 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 08:39:51 crc kubenswrapper[5028]: I1123 08:39:51.989237 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 08:39:52 crc kubenswrapper[5028]: I1123 08:39:52.053034 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"11216e20-1103-4fa8-b4fb-df9556d9114b","Type":"ContainerStarted","Data":"aa318fc1e873714931c593ccfb5b35b87760de0cc849ac54e0cfb70bf058a48f"} Nov 23 08:39:53 crc kubenswrapper[5028]: I1123 08:39:53.061893 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"11216e20-1103-4fa8-b4fb-df9556d9114b","Type":"ContainerStarted","Data":"61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198"} Nov 23 08:39:53 crc kubenswrapper[5028]: I1123 08:39:53.086624 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.87964869 podStartE2EDuration="4.086596331s" podCreationTimestamp="2025-11-23 08:39:49 +0000 UTC" firstStartedPulling="2025-11-23 08:39:51.997863304 +0000 UTC m=+6575.695268083" lastFinishedPulling="2025-11-23 08:39:52.204810945 +0000 UTC m=+6575.902215724" observedRunningTime="2025-11-23 08:39:53.078851852 +0000 UTC m=+6576.776256631" watchObservedRunningTime="2025-11-23 08:39:53.086596331 +0000 UTC m=+6576.784001110" Nov 23 08:39:53 crc kubenswrapper[5028]: I1123 08:39:53.904830 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:39:53 crc kubenswrapper[5028]: I1123 08:39:53.966290 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-97464f77-bbg8w"] Nov 23 08:39:53 crc kubenswrapper[5028]: I1123 08:39:53.966617 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-97464f77-bbg8w" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerName="dnsmasq-dns" containerID="cri-o://66673be3ae205d3ec01a1b153e3be50a726b37c86bbc27b3f6c57ce8a0a3138d" gracePeriod=10 Nov 23 08:39:54 crc kubenswrapper[5028]: I1123 08:39:54.621837 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-97464f77-bbg8w" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.252:5353: connect: connection refused" Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.095261 5028 generic.go:334] "Generic (PLEG): container finished" podID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerID="66673be3ae205d3ec01a1b153e3be50a726b37c86bbc27b3f6c57ce8a0a3138d" exitCode=0 Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.095344 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-bbg8w" event={"ID":"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1","Type":"ContainerDied","Data":"66673be3ae205d3ec01a1b153e3be50a726b37c86bbc27b3f6c57ce8a0a3138d"} Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.374313 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.460818 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-config\") pod \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.460911 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxdw4\" (UniqueName: \"kubernetes.io/projected/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-kube-api-access-nxdw4\") pod \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.461083 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-dns-svc\") pod \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\" (UID: \"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1\") " Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.494282 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-kube-api-access-nxdw4" (OuterVolumeSpecName: "kube-api-access-nxdw4") pod "56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" (UID: "56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1"). InnerVolumeSpecName "kube-api-access-nxdw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.559993 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" (UID: "56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.563573 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.563628 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxdw4\" (UniqueName: \"kubernetes.io/projected/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-kube-api-access-nxdw4\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.582723 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-config" (OuterVolumeSpecName: "config") pod "56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" (UID: "56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:39:55 crc kubenswrapper[5028]: I1123 08:39:55.666091 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:39:56 crc kubenswrapper[5028]: I1123 08:39:56.119667 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97464f77-bbg8w" event={"ID":"56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1","Type":"ContainerDied","Data":"7262beffe2fed47919296086399ebbc48ef6a788949a92018b0c85306f74bf28"} Nov 23 08:39:56 crc kubenswrapper[5028]: I1123 08:39:56.119733 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97464f77-bbg8w" Nov 23 08:39:56 crc kubenswrapper[5028]: I1123 08:39:56.119760 5028 scope.go:117] "RemoveContainer" containerID="66673be3ae205d3ec01a1b153e3be50a726b37c86bbc27b3f6c57ce8a0a3138d" Nov 23 08:39:56 crc kubenswrapper[5028]: I1123 08:39:56.150771 5028 scope.go:117] "RemoveContainer" containerID="bdfb3715d97016fb532bb718b7fc1c32fe4c02ba2a7a86266b6d3a6d9f8bac43" Nov 23 08:39:56 crc kubenswrapper[5028]: I1123 08:39:56.164650 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-97464f77-bbg8w"] Nov 23 08:39:56 crc kubenswrapper[5028]: I1123 08:39:56.174313 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-97464f77-bbg8w"] Nov 23 08:39:57 crc kubenswrapper[5028]: I1123 08:39:57.070386 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" path="/var/lib/kubelet/pods/56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1/volumes" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.071311 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 23 08:40:01 crc kubenswrapper[5028]: E1123 08:40:01.072053 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerName="init" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.072069 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerName="init" Nov 23 08:40:01 crc kubenswrapper[5028]: E1123 08:40:01.072093 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerName="dnsmasq-dns" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.072099 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerName="dnsmasq-dns" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.072265 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ae7d4a-099d-49c4-9cc4-e3f11d4ee1c1" containerName="dnsmasq-dns" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.073259 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.077361 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.078064 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-bpbh2" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.078507 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.091919 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.173880 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb69d04b-e20e-4411-bc0c-27a11ea44707-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.173962 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb69d04b-e20e-4411-bc0c-27a11ea44707-scripts\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.174016 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb69d04b-e20e-4411-bc0c-27a11ea44707-config\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.174135 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb69d04b-e20e-4411-bc0c-27a11ea44707-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.174205 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shm5b\" (UniqueName: \"kubernetes.io/projected/bb69d04b-e20e-4411-bc0c-27a11ea44707-kube-api-access-shm5b\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.276207 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb69d04b-e20e-4411-bc0c-27a11ea44707-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.276300 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shm5b\" (UniqueName: \"kubernetes.io/projected/bb69d04b-e20e-4411-bc0c-27a11ea44707-kube-api-access-shm5b\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.276346 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb69d04b-e20e-4411-bc0c-27a11ea44707-combined-ca-bundle\") 
pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.276374 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb69d04b-e20e-4411-bc0c-27a11ea44707-scripts\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.276408 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb69d04b-e20e-4411-bc0c-27a11ea44707-config\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.276735 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bb69d04b-e20e-4411-bc0c-27a11ea44707-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.277329 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb69d04b-e20e-4411-bc0c-27a11ea44707-config\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.277360 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb69d04b-e20e-4411-bc0c-27a11ea44707-scripts\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.284697 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb69d04b-e20e-4411-bc0c-27a11ea44707-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.294409 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shm5b\" (UniqueName: \"kubernetes.io/projected/bb69d04b-e20e-4411-bc0c-27a11ea44707-kube-api-access-shm5b\") pod \"ovn-northd-0\" (UID: \"bb69d04b-e20e-4411-bc0c-27a11ea44707\") " pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.443671 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 23 08:40:01 crc kubenswrapper[5028]: I1123 08:40:01.886021 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 23 08:40:02 crc kubenswrapper[5028]: I1123 08:40:02.182761 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bb69d04b-e20e-4411-bc0c-27a11ea44707","Type":"ContainerStarted","Data":"d874a7cd92453d58a371b8f8b9125e4cba2557d0af013679b61ce0fbbca43542"} Nov 23 08:40:03 crc kubenswrapper[5028]: I1123 08:40:03.197695 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bb69d04b-e20e-4411-bc0c-27a11ea44707","Type":"ContainerStarted","Data":"1d15409ea1b5b1269f856bdd81c56f4dc7966d7d041a0c822c325c2484caa7b5"} Nov 23 08:40:03 crc kubenswrapper[5028]: I1123 08:40:03.197757 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"bb69d04b-e20e-4411-bc0c-27a11ea44707","Type":"ContainerStarted","Data":"dc83672972ed90ad4f352879f369a85e8011cd95fcf6a58eabae4b645fa3187e"} Nov 23 08:40:03 crc kubenswrapper[5028]: I1123 08:40:03.197880 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 23 08:40:03 crc kubenswrapper[5028]: I1123 08:40:03.230609 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.503728512 podStartE2EDuration="2.230583189s" podCreationTimestamp="2025-11-23 08:40:01 +0000 UTC" firstStartedPulling="2025-11-23 08:40:01.917609298 +0000 UTC m=+6585.615014077" lastFinishedPulling="2025-11-23 08:40:02.644463975 +0000 UTC m=+6586.341868754" observedRunningTime="2025-11-23 08:40:03.217873948 +0000 UTC m=+6586.915278717" watchObservedRunningTime="2025-11-23 08:40:03.230583189 +0000 UTC m=+6586.927987978" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.014182 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bxk6x"] Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.016631 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.022826 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-aad8-account-create-v8929"] Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.024881 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.048129 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.066090 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-aad8-account-create-v8929"] Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.085338 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bxk6x"] Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.133991 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17dc3da-8776-40a5-a2a3-2f86ae78be13-operator-scripts\") pod \"keystone-aad8-account-create-v8929\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.134048 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcbdn\" (UniqueName: \"kubernetes.io/projected/d17dc3da-8776-40a5-a2a3-2f86ae78be13-kube-api-access-hcbdn\") pod \"keystone-aad8-account-create-v8929\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.134072 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xns4\" (UniqueName: \"kubernetes.io/projected/ed78771e-329e-47a4-be0e-fa85fb5eba7d-kube-api-access-6xns4\") pod \"keystone-db-create-bxk6x\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.134201 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed78771e-329e-47a4-be0e-fa85fb5eba7d-operator-scripts\") pod \"keystone-db-create-bxk6x\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.236303 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed78771e-329e-47a4-be0e-fa85fb5eba7d-operator-scripts\") pod \"keystone-db-create-bxk6x\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.236393 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17dc3da-8776-40a5-a2a3-2f86ae78be13-operator-scripts\") pod \"keystone-aad8-account-create-v8929\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.236425 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcbdn\" (UniqueName: \"kubernetes.io/projected/d17dc3da-8776-40a5-a2a3-2f86ae78be13-kube-api-access-hcbdn\") pod \"keystone-aad8-account-create-v8929\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.236447 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6xns4\" (UniqueName: \"kubernetes.io/projected/ed78771e-329e-47a4-be0e-fa85fb5eba7d-kube-api-access-6xns4\") pod \"keystone-db-create-bxk6x\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.237560 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17dc3da-8776-40a5-a2a3-2f86ae78be13-operator-scripts\") pod \"keystone-aad8-account-create-v8929\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.237640 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed78771e-329e-47a4-be0e-fa85fb5eba7d-operator-scripts\") pod \"keystone-db-create-bxk6x\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.259991 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xns4\" (UniqueName: \"kubernetes.io/projected/ed78771e-329e-47a4-be0e-fa85fb5eba7d-kube-api-access-6xns4\") pod \"keystone-db-create-bxk6x\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.260252 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcbdn\" (UniqueName: \"kubernetes.io/projected/d17dc3da-8776-40a5-a2a3-2f86ae78be13-kube-api-access-hcbdn\") pod \"keystone-aad8-account-create-v8929\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.339707 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.356769 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.884748 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-aad8-account-create-v8929"] Nov 23 08:40:09 crc kubenswrapper[5028]: I1123 08:40:09.895290 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bxk6x"] Nov 23 08:40:10 crc kubenswrapper[5028]: I1123 08:40:10.316008 5028 generic.go:334] "Generic (PLEG): container finished" podID="d17dc3da-8776-40a5-a2a3-2f86ae78be13" containerID="1ae03e844110d1f0d034ee22c23cc9bbb74ab6eb8e74b89078a11b3f04b481fb" exitCode=0 Nov 23 08:40:10 crc kubenswrapper[5028]: I1123 08:40:10.316118 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-aad8-account-create-v8929" event={"ID":"d17dc3da-8776-40a5-a2a3-2f86ae78be13","Type":"ContainerDied","Data":"1ae03e844110d1f0d034ee22c23cc9bbb74ab6eb8e74b89078a11b3f04b481fb"} Nov 23 08:40:10 crc kubenswrapper[5028]: I1123 08:40:10.316830 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-aad8-account-create-v8929" event={"ID":"d17dc3da-8776-40a5-a2a3-2f86ae78be13","Type":"ContainerStarted","Data":"40452b71ea570b8504906783af80780db663be3bd1ff4b7f07dcb0b70ad85114"} Nov 23 08:40:10 crc kubenswrapper[5028]: I1123 08:40:10.323508 5028 generic.go:334] "Generic (PLEG): container finished" podID="ed78771e-329e-47a4-be0e-fa85fb5eba7d" containerID="d7f4d40679c6d26d630b5ccad11aecb6777f98fd4715bd8e6c54e47a06c617f5" exitCode=0 Nov 23 08:40:10 crc kubenswrapper[5028]: I1123 08:40:10.323571 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bxk6x" event={"ID":"ed78771e-329e-47a4-be0e-fa85fb5eba7d","Type":"ContainerDied","Data":"d7f4d40679c6d26d630b5ccad11aecb6777f98fd4715bd8e6c54e47a06c617f5"} Nov 23 08:40:10 crc kubenswrapper[5028]: I1123 08:40:10.323606 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bxk6x" event={"ID":"ed78771e-329e-47a4-be0e-fa85fb5eba7d","Type":"ContainerStarted","Data":"0cdd6cb758a353e0e9c60cf8a09b63ddc012af7231d47eb6f4c7803430ac888b"} Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.770875 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.800489 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xns4\" (UniqueName: \"kubernetes.io/projected/ed78771e-329e-47a4-be0e-fa85fb5eba7d-kube-api-access-6xns4\") pod \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.802175 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed78771e-329e-47a4-be0e-fa85fb5eba7d-operator-scripts\") pod \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\" (UID: \"ed78771e-329e-47a4-be0e-fa85fb5eba7d\") " Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.803397 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed78771e-329e-47a4-be0e-fa85fb5eba7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed78771e-329e-47a4-be0e-fa85fb5eba7d" (UID: "ed78771e-329e-47a4-be0e-fa85fb5eba7d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.838981 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed78771e-329e-47a4-be0e-fa85fb5eba7d-kube-api-access-6xns4" (OuterVolumeSpecName: "kube-api-access-6xns4") pod "ed78771e-329e-47a4-be0e-fa85fb5eba7d" (UID: "ed78771e-329e-47a4-be0e-fa85fb5eba7d"). InnerVolumeSpecName "kube-api-access-6xns4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.877857 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.905553 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17dc3da-8776-40a5-a2a3-2f86ae78be13-operator-scripts\") pod \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.905860 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcbdn\" (UniqueName: \"kubernetes.io/projected/d17dc3da-8776-40a5-a2a3-2f86ae78be13-kube-api-access-hcbdn\") pod \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\" (UID: \"d17dc3da-8776-40a5-a2a3-2f86ae78be13\") " Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.906247 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17dc3da-8776-40a5-a2a3-2f86ae78be13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d17dc3da-8776-40a5-a2a3-2f86ae78be13" (UID: "d17dc3da-8776-40a5-a2a3-2f86ae78be13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.906434 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xns4\" (UniqueName: \"kubernetes.io/projected/ed78771e-329e-47a4-be0e-fa85fb5eba7d-kube-api-access-6xns4\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.906448 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d17dc3da-8776-40a5-a2a3-2f86ae78be13-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.906458 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed78771e-329e-47a4-be0e-fa85fb5eba7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:11 crc kubenswrapper[5028]: I1123 08:40:11.910755 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17dc3da-8776-40a5-a2a3-2f86ae78be13-kube-api-access-hcbdn" (OuterVolumeSpecName: "kube-api-access-hcbdn") pod "d17dc3da-8776-40a5-a2a3-2f86ae78be13" (UID: "d17dc3da-8776-40a5-a2a3-2f86ae78be13"). InnerVolumeSpecName "kube-api-access-hcbdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:40:12 crc kubenswrapper[5028]: I1123 08:40:12.008185 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcbdn\" (UniqueName: \"kubernetes.io/projected/d17dc3da-8776-40a5-a2a3-2f86ae78be13-kube-api-access-hcbdn\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:12 crc kubenswrapper[5028]: I1123 08:40:12.346270 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-aad8-account-create-v8929" event={"ID":"d17dc3da-8776-40a5-a2a3-2f86ae78be13","Type":"ContainerDied","Data":"40452b71ea570b8504906783af80780db663be3bd1ff4b7f07dcb0b70ad85114"} Nov 23 08:40:12 crc kubenswrapper[5028]: I1123 08:40:12.346332 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40452b71ea570b8504906783af80780db663be3bd1ff4b7f07dcb0b70ad85114" Nov 23 08:40:12 crc kubenswrapper[5028]: I1123 08:40:12.346342 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-aad8-account-create-v8929" Nov 23 08:40:12 crc kubenswrapper[5028]: I1123 08:40:12.348432 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bxk6x" event={"ID":"ed78771e-329e-47a4-be0e-fa85fb5eba7d","Type":"ContainerDied","Data":"0cdd6cb758a353e0e9c60cf8a09b63ddc012af7231d47eb6f4c7803430ac888b"} Nov 23 08:40:12 crc kubenswrapper[5028]: I1123 08:40:12.348463 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cdd6cb758a353e0e9c60cf8a09b63ddc012af7231d47eb6f4c7803430ac888b" Nov 23 08:40:12 crc kubenswrapper[5028]: I1123 08:40:12.348524 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bxk6x" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.640304 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vdvks"] Nov 23 08:40:14 crc kubenswrapper[5028]: E1123 08:40:14.641081 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17dc3da-8776-40a5-a2a3-2f86ae78be13" containerName="mariadb-account-create" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.641094 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17dc3da-8776-40a5-a2a3-2f86ae78be13" containerName="mariadb-account-create" Nov 23 08:40:14 crc kubenswrapper[5028]: E1123 08:40:14.641105 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed78771e-329e-47a4-be0e-fa85fb5eba7d" containerName="mariadb-database-create" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.641124 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed78771e-329e-47a4-be0e-fa85fb5eba7d" containerName="mariadb-database-create" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.641292 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed78771e-329e-47a4-be0e-fa85fb5eba7d" containerName="mariadb-database-create" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.641309 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17dc3da-8776-40a5-a2a3-2f86ae78be13" containerName="mariadb-account-create" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.641901 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.646197 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.646420 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.646811 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-45b8s" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.647446 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.670156 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-combined-ca-bundle\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.670232 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxwd\" (UniqueName: \"kubernetes.io/projected/e9d65160-2c51-40af-88b1-bd26e76c2a42-kube-api-access-tvxwd\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.670315 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-config-data\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.672807 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vdvks"] Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.772571 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-combined-ca-bundle\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.772667 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxwd\" (UniqueName: \"kubernetes.io/projected/e9d65160-2c51-40af-88b1-bd26e76c2a42-kube-api-access-tvxwd\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.772723 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-config-data\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.786917 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-combined-ca-bundle\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " 
pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.789913 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-config-data\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.800792 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxwd\" (UniqueName: \"kubernetes.io/projected/e9d65160-2c51-40af-88b1-bd26e76c2a42-kube-api-access-tvxwd\") pod \"keystone-db-sync-vdvks\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:14 crc kubenswrapper[5028]: I1123 08:40:14.964739 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:15 crc kubenswrapper[5028]: I1123 08:40:15.284287 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vdvks"] Nov 23 08:40:15 crc kubenswrapper[5028]: I1123 08:40:15.378560 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vdvks" event={"ID":"e9d65160-2c51-40af-88b1-bd26e76c2a42","Type":"ContainerStarted","Data":"150efe36fa325ff2977981dfd312a8e6b7b364f4b2b2b1f301a204fe75d716a4"} Nov 23 08:40:16 crc kubenswrapper[5028]: I1123 08:40:16.526808 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 23 08:40:21 crc kubenswrapper[5028]: I1123 08:40:21.475021 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vdvks" event={"ID":"e9d65160-2c51-40af-88b1-bd26e76c2a42","Type":"ContainerStarted","Data":"2b7147cec71e05e455c361bc4834576f5448ace823a31d9cafdd91c3b1b937aa"} Nov 23 08:40:21 crc kubenswrapper[5028]: I1123 08:40:21.495245 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vdvks" podStartSLOduration=2.290100394 podStartE2EDuration="7.49522394s" podCreationTimestamp="2025-11-23 08:40:14 +0000 UTC" firstStartedPulling="2025-11-23 08:40:15.292044695 +0000 UTC m=+6598.989449464" lastFinishedPulling="2025-11-23 08:40:20.497168231 +0000 UTC m=+6604.194573010" observedRunningTime="2025-11-23 08:40:21.493069107 +0000 UTC m=+6605.190473886" watchObservedRunningTime="2025-11-23 08:40:21.49522394 +0000 UTC m=+6605.192628719" Nov 23 08:40:22 crc kubenswrapper[5028]: I1123 08:40:22.490677 5028 generic.go:334] "Generic (PLEG): container finished" podID="e9d65160-2c51-40af-88b1-bd26e76c2a42" containerID="2b7147cec71e05e455c361bc4834576f5448ace823a31d9cafdd91c3b1b937aa" exitCode=0 Nov 23 08:40:22 crc kubenswrapper[5028]: I1123 08:40:22.491278 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vdvks" event={"ID":"e9d65160-2c51-40af-88b1-bd26e76c2a42","Type":"ContainerDied","Data":"2b7147cec71e05e455c361bc4834576f5448ace823a31d9cafdd91c3b1b937aa"} Nov 23 08:40:23 crc kubenswrapper[5028]: I1123 08:40:23.827549 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:23 crc kubenswrapper[5028]: I1123 08:40:23.915894 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxwd\" (UniqueName: \"kubernetes.io/projected/e9d65160-2c51-40af-88b1-bd26e76c2a42-kube-api-access-tvxwd\") pod \"e9d65160-2c51-40af-88b1-bd26e76c2a42\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " Nov 23 08:40:23 crc kubenswrapper[5028]: I1123 08:40:23.916035 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-config-data\") pod \"e9d65160-2c51-40af-88b1-bd26e76c2a42\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " Nov 23 08:40:23 crc kubenswrapper[5028]: I1123 08:40:23.916084 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-combined-ca-bundle\") pod \"e9d65160-2c51-40af-88b1-bd26e76c2a42\" (UID: \"e9d65160-2c51-40af-88b1-bd26e76c2a42\") " Nov 23 08:40:23 crc kubenswrapper[5028]: I1123 08:40:23.923985 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d65160-2c51-40af-88b1-bd26e76c2a42-kube-api-access-tvxwd" (OuterVolumeSpecName: "kube-api-access-tvxwd") pod "e9d65160-2c51-40af-88b1-bd26e76c2a42" (UID: "e9d65160-2c51-40af-88b1-bd26e76c2a42"). InnerVolumeSpecName "kube-api-access-tvxwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:40:23 crc kubenswrapper[5028]: I1123 08:40:23.969506 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9d65160-2c51-40af-88b1-bd26e76c2a42" (UID: "e9d65160-2c51-40af-88b1-bd26e76c2a42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:40:23 crc kubenswrapper[5028]: I1123 08:40:23.976146 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-config-data" (OuterVolumeSpecName: "config-data") pod "e9d65160-2c51-40af-88b1-bd26e76c2a42" (UID: "e9d65160-2c51-40af-88b1-bd26e76c2a42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.019065 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvxwd\" (UniqueName: \"kubernetes.io/projected/e9d65160-2c51-40af-88b1-bd26e76c2a42-kube-api-access-tvxwd\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.019151 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.019182 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d65160-2c51-40af-88b1-bd26e76c2a42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.511328 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vdvks" event={"ID":"e9d65160-2c51-40af-88b1-bd26e76c2a42","Type":"ContainerDied","Data":"150efe36fa325ff2977981dfd312a8e6b7b364f4b2b2b1f301a204fe75d716a4"} Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.511730 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="150efe36fa325ff2977981dfd312a8e6b7b364f4b2b2b1f301a204fe75d716a4" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.511401 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vdvks" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.847014 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68445c757c-lwxlx"] Nov 23 08:40:24 crc kubenswrapper[5028]: E1123 08:40:24.847479 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d65160-2c51-40af-88b1-bd26e76c2a42" containerName="keystone-db-sync" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.847493 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d65160-2c51-40af-88b1-bd26e76c2a42" containerName="keystone-db-sync" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.847694 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d65160-2c51-40af-88b1-bd26e76c2a42" containerName="keystone-db-sync" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.848911 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.882660 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wpqzh"] Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.884800 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.890034 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-45b8s"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.890345 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.890498 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.890629 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.895787 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.901339 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68445c757c-lwxlx"]
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.914288 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wpqzh"]
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.936525 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-config-data\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.936756 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-config\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.936860 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6kv9\" (UniqueName: \"kubernetes.io/projected/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-kube-api-access-q6kv9\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.936940 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7xjm\" (UniqueName: \"kubernetes.io/projected/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-kube-api-access-v7xjm\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.936993 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-combined-ca-bundle\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.937053 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-dns-svc\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.937163 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-sb\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.937264 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-credential-keys\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.937312 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-scripts\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.937336 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-fernet-keys\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:24 crc kubenswrapper[5028]: I1123 08:40:24.937363 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-nb\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040055 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-sb\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040354 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-credential-keys\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040446 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-scripts\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040517 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-fernet-keys\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040580 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-nb\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040660 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-config-data\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040764 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-config\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040835 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6kv9\" (UniqueName: \"kubernetes.io/projected/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-kube-api-access-q6kv9\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.040911 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7xjm\" (UniqueName: \"kubernetes.io/projected/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-kube-api-access-v7xjm\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.041001 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-combined-ca-bundle\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.041076 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-sb\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.041170 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-dns-svc\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.042652 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-config\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.043089 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-dns-svc\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.043081 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-nb\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.050358 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-combined-ca-bundle\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.051238 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-scripts\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.051273 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-fernet-keys\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.055324 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-config-data\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.057556 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-credential-keys\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.065915 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7xjm\" (UniqueName: \"kubernetes.io/projected/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-kube-api-access-v7xjm\") pod \"keystone-bootstrap-wpqzh\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") " pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.070795 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6kv9\" (UniqueName: \"kubernetes.io/projected/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-kube-api-access-q6kv9\") pod \"dnsmasq-dns-68445c757c-lwxlx\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") " pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.179375 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.208743 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.680920 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68445c757c-lwxlx"]
Nov 23 08:40:25 crc kubenswrapper[5028]: W1123 08:40:25.686582 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f6e1f2_208a_47dd_ac12_14843ddf0d7a.slice/crio-39f8c01db36afef937d364875738905019083770ac88a515bbc14f061488ad60 WatchSource:0}: Error finding container 39f8c01db36afef937d364875738905019083770ac88a515bbc14f061488ad60: Status 404 returned error can't find the container with id 39f8c01db36afef937d364875738905019083770ac88a515bbc14f061488ad60
Nov 23 08:40:25 crc kubenswrapper[5028]: I1123 08:40:25.737759 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wpqzh"]
Nov 23 08:40:26 crc kubenswrapper[5028]: I1123 08:40:26.532102 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wpqzh" event={"ID":"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538","Type":"ContainerStarted","Data":"2fa8196c86108cbd58f2f965a955bd8d0600cf4180af89101b90ba9a1238f9d2"}
Nov 23 08:40:26 crc kubenswrapper[5028]: I1123 08:40:26.532584 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wpqzh" event={"ID":"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538","Type":"ContainerStarted","Data":"1be62487b55064bbe643ad47398b323084e5871b43341424c96cf8559c677c8d"}
Nov 23 08:40:26 crc kubenswrapper[5028]: I1123 08:40:26.535333 5028 generic.go:334] "Generic (PLEG): container finished" podID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerID="7ab45db4e29f738b232f9b2f82eba1ad465de876c87c93c2c30ee61a4d3de7dc" exitCode=0
Nov 23 08:40:26 crc kubenswrapper[5028]: I1123 08:40:26.535370 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" event={"ID":"86f6e1f2-208a-47dd-ac12-14843ddf0d7a","Type":"ContainerDied","Data":"7ab45db4e29f738b232f9b2f82eba1ad465de876c87c93c2c30ee61a4d3de7dc"}
Nov 23 08:40:26 crc kubenswrapper[5028]: I1123 08:40:26.535389 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" event={"ID":"86f6e1f2-208a-47dd-ac12-14843ddf0d7a","Type":"ContainerStarted","Data":"39f8c01db36afef937d364875738905019083770ac88a515bbc14f061488ad60"}
Nov 23 08:40:26 crc kubenswrapper[5028]: I1123 08:40:26.567591 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wpqzh" podStartSLOduration=2.567566548 podStartE2EDuration="2.567566548s" podCreationTimestamp="2025-11-23 08:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:40:26.564189185 +0000 UTC m=+6610.261593994" watchObservedRunningTime="2025-11-23 08:40:26.567566548 +0000 UTC m=+6610.264971327"
Nov 23 08:40:27 crc kubenswrapper[5028]: I1123 08:40:27.551628 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" event={"ID":"86f6e1f2-208a-47dd-ac12-14843ddf0d7a","Type":"ContainerStarted","Data":"6880882d84c2eceb815d2809c664850d6ab07ade76d246fab4eff968bb98d2e3"}
Nov 23 08:40:27 crc kubenswrapper[5028]: I1123 08:40:27.593628 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" podStartSLOduration=3.593604403 podStartE2EDuration="3.593604403s" podCreationTimestamp="2025-11-23 08:40:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:40:27.580978504 +0000 UTC m=+6611.278383333" watchObservedRunningTime="2025-11-23 08:40:27.593604403 +0000 UTC m=+6611.291009192"
Nov 23 08:40:28 crc kubenswrapper[5028]: I1123 08:40:28.558769 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:29 crc kubenswrapper[5028]: I1123 08:40:29.573435 5028 generic.go:334] "Generic (PLEG): container finished" podID="1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" containerID="2fa8196c86108cbd58f2f965a955bd8d0600cf4180af89101b90ba9a1238f9d2" exitCode=0
Nov 23 08:40:29 crc kubenswrapper[5028]: I1123 08:40:29.573566 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wpqzh" event={"ID":"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538","Type":"ContainerDied","Data":"2fa8196c86108cbd58f2f965a955bd8d0600cf4180af89101b90ba9a1238f9d2"}
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.946595 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.973149 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-combined-ca-bundle\") pod \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") "
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.973338 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-fernet-keys\") pod \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") "
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.973539 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-scripts\") pod \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") "
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.973597 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-config-data\") pod \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") "
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.973669 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-credential-keys\") pod \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") "
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.974902 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7xjm\" (UniqueName: \"kubernetes.io/projected/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-kube-api-access-v7xjm\") pod \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\" (UID: \"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538\") "
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.982246 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" (UID: "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.982328 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-kube-api-access-v7xjm" (OuterVolumeSpecName: "kube-api-access-v7xjm") pod "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" (UID: "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538"). InnerVolumeSpecName "kube-api-access-v7xjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.983440 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" (UID: "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:40:30 crc kubenswrapper[5028]: I1123 08:40:30.987457 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-scripts" (OuterVolumeSpecName: "scripts") pod "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" (UID: "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.007715 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-config-data" (OuterVolumeSpecName: "config-data") pod "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" (UID: "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.028999 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" (UID: "1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.077631 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.077667 5028 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.077676 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.077688 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.077697 5028 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.077706 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7xjm\" (UniqueName: \"kubernetes.io/projected/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538-kube-api-access-v7xjm\") on node \"crc\" DevicePath \"\""
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.602943 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wpqzh" event={"ID":"1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538","Type":"ContainerDied","Data":"1be62487b55064bbe643ad47398b323084e5871b43341424c96cf8559c677c8d"}
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.603008 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be62487b55064bbe643ad47398b323084e5871b43341424c96cf8559c677c8d"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.603050 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wpqzh"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.687688 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wpqzh"]
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.694315 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wpqzh"]
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.783459 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lz5zn"]
Nov 23 08:40:31 crc kubenswrapper[5028]: E1123 08:40:31.783835 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" containerName="keystone-bootstrap"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.783852 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" containerName="keystone-bootstrap"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.784044 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" containerName="keystone-bootstrap"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.784616 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.786863 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.790480 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.790859 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.791069 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.791344 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-45b8s"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.800639 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lz5zn"]
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.893554 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-scripts\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.893659 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-credential-keys\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.893682 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-combined-ca-bundle\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.893729 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-config-data\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.893784 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-fernet-keys\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.893806 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mnjh\" (UniqueName: \"kubernetes.io/projected/bde709e3-05f1-4d49-ab07-dc201e568476-kube-api-access-5mnjh\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.995897 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-fernet-keys\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.996877 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mnjh\" (UniqueName: \"kubernetes.io/projected/bde709e3-05f1-4d49-ab07-dc201e568476-kube-api-access-5mnjh\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.998054 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-scripts\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.998667 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-credential-keys\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.998761 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-combined-ca-bundle\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:31 crc kubenswrapper[5028]: I1123 08:40:31.998908 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-config-data\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.000569 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-fernet-keys\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.011560 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-credential-keys\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.011881 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-config-data\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.012109 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-scripts\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.012275 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-combined-ca-bundle\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.016412 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mnjh\" (UniqueName: \"kubernetes.io/projected/bde709e3-05f1-4d49-ab07-dc201e568476-kube-api-access-5mnjh\") pod \"keystone-bootstrap-lz5zn\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.103828 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lz5zn"
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.588088 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lz5zn"]
Nov 23 08:40:32 crc kubenswrapper[5028]: I1123 08:40:32.614534 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lz5zn" event={"ID":"bde709e3-05f1-4d49-ab07-dc201e568476","Type":"ContainerStarted","Data":"880a155f1196eab86d70022a02b0efb5e88abb401a1fa0a0891f0555579318dc"}
Nov 23 08:40:33 crc kubenswrapper[5028]: I1123 08:40:33.071565 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538" path="/var/lib/kubelet/pods/1aaa3f3a-f9a6-4ead-a7bc-0ff8eb734538/volumes"
Nov 23 08:40:33 crc kubenswrapper[5028]: I1123 08:40:33.628576 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lz5zn" event={"ID":"bde709e3-05f1-4d49-ab07-dc201e568476","Type":"ContainerStarted","Data":"f1fa2414eb0f1977f9b52ab134d016665a4ed8fe81318f1b254bb3eaf09cfdb1"}
Nov 23 08:40:33 crc kubenswrapper[5028]: I1123 08:40:33.655338 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lz5zn" podStartSLOduration=2.655318778 podStartE2EDuration="2.655318778s" podCreationTimestamp="2025-11-23 08:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:40:33.653269047 +0000 UTC m=+6617.350673826" watchObservedRunningTime="2025-11-23 08:40:33.655318778 +0000 UTC m=+6617.352723557"
Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.181212 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.274331 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657df5f45c-s72r9"]
Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.288380 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" podUID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerName="dnsmasq-dns" containerID="cri-o://54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3" gracePeriod=10
Nov 23 08:40:35 crc kubenswrapper[5028]: E1123 08:40:35.469209 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf4c47e_bde8_4d99_accf_2f7383405c64.slice/crio-conmon-54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf4c47e_bde8_4d99_accf_2f7383405c64.slice/crio-54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.651732 5028 generic.go:334] "Generic (PLEG): container finished" podID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerID="54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3" exitCode=0
Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.651793 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" event={"ID":"5cf4c47e-bde8-4d99-accf-2f7383405c64","Type":"ContainerDied","Data":"54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3"}
event={"ID":"5cf4c47e-bde8-4d99-accf-2f7383405c64","Type":"ContainerDied","Data":"54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3"} Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.817188 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.945392 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-nb\") pod \"5cf4c47e-bde8-4d99-accf-2f7383405c64\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.945566 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-config\") pod \"5cf4c47e-bde8-4d99-accf-2f7383405c64\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.945665 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-dns-svc\") pod \"5cf4c47e-bde8-4d99-accf-2f7383405c64\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.945688 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fthjg\" (UniqueName: \"kubernetes.io/projected/5cf4c47e-bde8-4d99-accf-2f7383405c64-kube-api-access-fthjg\") pod \"5cf4c47e-bde8-4d99-accf-2f7383405c64\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.945721 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-sb\") pod \"5cf4c47e-bde8-4d99-accf-2f7383405c64\" (UID: \"5cf4c47e-bde8-4d99-accf-2f7383405c64\") " Nov 23 08:40:35 crc kubenswrapper[5028]: I1123 08:40:35.957512 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf4c47e-bde8-4d99-accf-2f7383405c64-kube-api-access-fthjg" (OuterVolumeSpecName: "kube-api-access-fthjg") pod "5cf4c47e-bde8-4d99-accf-2f7383405c64" (UID: "5cf4c47e-bde8-4d99-accf-2f7383405c64"). InnerVolumeSpecName "kube-api-access-fthjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.002132 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5cf4c47e-bde8-4d99-accf-2f7383405c64" (UID: "5cf4c47e-bde8-4d99-accf-2f7383405c64"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.023055 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5cf4c47e-bde8-4d99-accf-2f7383405c64" (UID: "5cf4c47e-bde8-4d99-accf-2f7383405c64"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.025888 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cf4c47e-bde8-4d99-accf-2f7383405c64" (UID: "5cf4c47e-bde8-4d99-accf-2f7383405c64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.025888 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-config" (OuterVolumeSpecName: "config") pod "5cf4c47e-bde8-4d99-accf-2f7383405c64" (UID: "5cf4c47e-bde8-4d99-accf-2f7383405c64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.047614 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.047660 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.047682 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fthjg\" (UniqueName: \"kubernetes.io/projected/5cf4c47e-bde8-4d99-accf-2f7383405c64-kube-api-access-fthjg\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.047702 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.047713 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cf4c47e-bde8-4d99-accf-2f7383405c64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.664789 5028 generic.go:334] "Generic (PLEG): container finished" podID="bde709e3-05f1-4d49-ab07-dc201e568476" containerID="f1fa2414eb0f1977f9b52ab134d016665a4ed8fe81318f1b254bb3eaf09cfdb1" exitCode=0 Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.664845 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lz5zn" event={"ID":"bde709e3-05f1-4d49-ab07-dc201e568476","Type":"ContainerDied","Data":"f1fa2414eb0f1977f9b52ab134d016665a4ed8fe81318f1b254bb3eaf09cfdb1"} Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.667207 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" event={"ID":"5cf4c47e-bde8-4d99-accf-2f7383405c64","Type":"ContainerDied","Data":"05ba855a990c4a21253b5964f0e9f5b8407a6439996ca59f0525775f09e08231"} Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.667252 5028 scope.go:117] "RemoveContainer" containerID="54ba5db027a33519af07a7c137057a1c6d5907406e690d0f9f933aa2f2fe00d3" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.667377 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-657df5f45c-s72r9" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.698212 5028 scope.go:117] "RemoveContainer" containerID="8dad258f23fe121485ad698ac9cb22c9e9eee3ee6ae67f1f9289e619b4f75b77" Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.720334 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-657df5f45c-s72r9"] Nov 23 08:40:36 crc kubenswrapper[5028]: I1123 08:40:36.726234 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-657df5f45c-s72r9"] Nov 23 08:40:37 crc kubenswrapper[5028]: I1123 08:40:37.064762 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf4c47e-bde8-4d99-accf-2f7383405c64" path="/var/lib/kubelet/pods/5cf4c47e-bde8-4d99-accf-2f7383405c64/volumes" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.042151 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lz5zn" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.190080 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-combined-ca-bundle\") pod \"bde709e3-05f1-4d49-ab07-dc201e568476\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.190350 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-config-data\") pod \"bde709e3-05f1-4d49-ab07-dc201e568476\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.190415 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-credential-keys\") pod \"bde709e3-05f1-4d49-ab07-dc201e568476\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.190449 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mnjh\" (UniqueName: \"kubernetes.io/projected/bde709e3-05f1-4d49-ab07-dc201e568476-kube-api-access-5mnjh\") pod \"bde709e3-05f1-4d49-ab07-dc201e568476\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.190598 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-scripts\") pod \"bde709e3-05f1-4d49-ab07-dc201e568476\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.190622 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-fernet-keys\") pod \"bde709e3-05f1-4d49-ab07-dc201e568476\" (UID: \"bde709e3-05f1-4d49-ab07-dc201e568476\") " Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.198236 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde709e3-05f1-4d49-ab07-dc201e568476-kube-api-access-5mnjh" (OuterVolumeSpecName: "kube-api-access-5mnjh") pod "bde709e3-05f1-4d49-ab07-dc201e568476" (UID: "bde709e3-05f1-4d49-ab07-dc201e568476"). InnerVolumeSpecName "kube-api-access-5mnjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.198265 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-scripts" (OuterVolumeSpecName: "scripts") pod "bde709e3-05f1-4d49-ab07-dc201e568476" (UID: "bde709e3-05f1-4d49-ab07-dc201e568476"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.198316 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bde709e3-05f1-4d49-ab07-dc201e568476" (UID: "bde709e3-05f1-4d49-ab07-dc201e568476"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.198332 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bde709e3-05f1-4d49-ab07-dc201e568476" (UID: "bde709e3-05f1-4d49-ab07-dc201e568476"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.218370 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-config-data" (OuterVolumeSpecName: "config-data") pod "bde709e3-05f1-4d49-ab07-dc201e568476" (UID: "bde709e3-05f1-4d49-ab07-dc201e568476"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.235581 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bde709e3-05f1-4d49-ab07-dc201e568476" (UID: "bde709e3-05f1-4d49-ab07-dc201e568476"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.293584 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.293621 5028 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.293633 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.293645 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.293656 5028 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bde709e3-05f1-4d49-ab07-dc201e568476-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.293666 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mnjh\" (UniqueName: \"kubernetes.io/projected/bde709e3-05f1-4d49-ab07-dc201e568476-kube-api-access-5mnjh\") on node \"crc\" DevicePath \"\"" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.716426 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lz5zn" event={"ID":"bde709e3-05f1-4d49-ab07-dc201e568476","Type":"ContainerDied","Data":"880a155f1196eab86d70022a02b0efb5e88abb401a1fa0a0891f0555579318dc"} Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.717211 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="880a155f1196eab86d70022a02b0efb5e88abb401a1fa0a0891f0555579318dc" Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.716982 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.776656 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5fdfc97958-gprrf"]
Nov 23 08:40:38 crc kubenswrapper[5028]: E1123 08:40:38.777126 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde709e3-05f1-4d49-ab07-dc201e568476" containerName="keystone-bootstrap"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.777147 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde709e3-05f1-4d49-ab07-dc201e568476" containerName="keystone-bootstrap"
Nov 23 08:40:38 crc kubenswrapper[5028]: E1123 08:40:38.777180 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerName="dnsmasq-dns"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.777188 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerName="dnsmasq-dns"
Nov 23 08:40:38 crc kubenswrapper[5028]: E1123 08:40:38.777199 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerName="init"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.777206 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerName="init"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.777405 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf4c47e-bde8-4d99-accf-2f7383405c64" containerName="dnsmasq-dns"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.777420 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde709e3-05f1-4d49-ab07-dc201e568476" containerName="keystone-bootstrap"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.778131 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.783597 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.783908 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.784132 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-45b8s"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.785439 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.819134 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-scripts\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.819315 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htrvx\" (UniqueName: \"kubernetes.io/projected/f0dd1917-f821-48bf-bb71-f21b4116334d-kube-api-access-htrvx\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.819522 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-config-data\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.819606 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-fernet-keys\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.819658 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-credential-keys\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.822951 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-combined-ca-bundle\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.828109 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5fdfc97958-gprrf"]
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.925003 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-combined-ca-bundle\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.925412 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-scripts\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.925514 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htrvx\" (UniqueName: \"kubernetes.io/projected/f0dd1917-f821-48bf-bb71-f21b4116334d-kube-api-access-htrvx\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.925668 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-config-data\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.925747 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-fernet-keys\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.925812 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-credential-keys\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.932164 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-combined-ca-bundle\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.933999 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-config-data\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.934393 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-scripts\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.940048 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-credential-keys\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.945688 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0dd1917-f821-48bf-bb71-f21b4116334d-fernet-keys\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:38 crc kubenswrapper[5028]: I1123 08:40:38.968772 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htrvx\" (UniqueName: \"kubernetes.io/projected/f0dd1917-f821-48bf-bb71-f21b4116334d-kube-api-access-htrvx\") pod \"keystone-5fdfc97958-gprrf\" (UID: \"f0dd1917-f821-48bf-bb71-f21b4116334d\") " pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:39 crc kubenswrapper[5028]: I1123 08:40:39.119704 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:39 crc kubenswrapper[5028]: I1123 08:40:39.604784 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5fdfc97958-gprrf"]
Nov 23 08:40:39 crc kubenswrapper[5028]: W1123 08:40:39.608113 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0dd1917_f821_48bf_bb71_f21b4116334d.slice/crio-1276062d86f9e516873afa0a0f15d15e2f1390d4093b5ea8b8609f62fed02db2 WatchSource:0}: Error finding container 1276062d86f9e516873afa0a0f15d15e2f1390d4093b5ea8b8609f62fed02db2: Status 404 returned error can't find the container with id 1276062d86f9e516873afa0a0f15d15e2f1390d4093b5ea8b8609f62fed02db2
Nov 23 08:40:39 crc kubenswrapper[5028]: I1123 08:40:39.729249 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5fdfc97958-gprrf" event={"ID":"f0dd1917-f821-48bf-bb71-f21b4116334d","Type":"ContainerStarted","Data":"1276062d86f9e516873afa0a0f15d15e2f1390d4093b5ea8b8609f62fed02db2"}
Nov 23 08:40:40 crc kubenswrapper[5028]: I1123 08:40:40.739906 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5fdfc97958-gprrf" event={"ID":"f0dd1917-f821-48bf-bb71-f21b4116334d","Type":"ContainerStarted","Data":"38034c33c93f4c71553a6c24562e2f50ec5f830f9095d33c0775f30682b8571b"}
Nov 23 08:40:40 crc kubenswrapper[5028]: I1123 08:40:40.741297 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:40:40 crc kubenswrapper[5028]: I1123 08:40:40.778905 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5fdfc97958-gprrf" podStartSLOduration=2.778876703 podStartE2EDuration="2.778876703s" podCreationTimestamp="2025-11-23 08:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:40:40.772284782 +0000 UTC m=+6624.469689561" watchObservedRunningTime="2025-11-23 08:40:40.778876703 +0000 UTC m=+6624.476281482"
Nov 23 08:41:10 crc kubenswrapper[5028]: I1123 08:41:10.722799 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5fdfc97958-gprrf"
Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.037891 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.039775 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Need to start a new one" pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.042071 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.043178 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-xgg6c" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.043542 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.059534 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.120156 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxp2g\" (UniqueName: \"kubernetes.io/projected/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-kube-api-access-bxp2g\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.120286 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config-secret\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.120315 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.221835 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxp2g\" (UniqueName: \"kubernetes.io/projected/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-kube-api-access-bxp2g\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.221922 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config-secret\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.221978 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.222861 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.234603 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config-secret\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.247110 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxp2g\" (UniqueName: \"kubernetes.io/projected/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-kube-api-access-bxp2g\") pod \"openstackclient\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.389688 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.903787 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:41:14 crc kubenswrapper[5028]: W1123 08:41:14.922389 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod563d9dc8_9dfd_4e9f_99e2_78f24bbf8d41.slice/crio-83d4854383737e7f23dfae9bdeeda2a2faad4cd946791ddbf095ab968a3b5fa0 WatchSource:0}: Error finding container 83d4854383737e7f23dfae9bdeeda2a2faad4cd946791ddbf095ab968a3b5fa0: Status 404 returned error can't find the container with id 83d4854383737e7f23dfae9bdeeda2a2faad4cd946791ddbf095ab968a3b5fa0 Nov 23 08:41:14 crc kubenswrapper[5028]: I1123 08:41:14.927561 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:41:15 crc kubenswrapper[5028]: I1123 08:41:15.102649 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41","Type":"ContainerStarted","Data":"83d4854383737e7f23dfae9bdeeda2a2faad4cd946791ddbf095ab968a3b5fa0"} Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.533205 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-snwc5"] Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.537450 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.550469 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snwc5"] Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.618998 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-utilities\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.619154 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjdd\" (UniqueName: \"kubernetes.io/projected/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-kube-api-access-qtjdd\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.619214 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-catalog-content\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.723452 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-utilities\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.723614 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjdd\" (UniqueName: \"kubernetes.io/projected/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-kube-api-access-qtjdd\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.723655 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-catalog-content\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.724205 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-catalog-content\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.725149 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-utilities\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.748282 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qtjdd\" (UniqueName: \"kubernetes.io/projected/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-kube-api-access-qtjdd\") pod \"redhat-marketplace-snwc5\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:18 crc kubenswrapper[5028]: I1123 08:41:18.898857 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:19 crc kubenswrapper[5028]: I1123 08:41:19.379225 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snwc5"] Nov 23 08:41:19 crc kubenswrapper[5028]: W1123 08:41:19.392174 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91ee638f_99ac_4f86_9b8a_4e8ad2feca25.slice/crio-6702a0336e57166c8d0502b11731e54f0a7b36a6edeab68c81c652a6679448ff WatchSource:0}: Error finding container 6702a0336e57166c8d0502b11731e54f0a7b36a6edeab68c81c652a6679448ff: Status 404 returned error can't find the container with id 6702a0336e57166c8d0502b11731e54f0a7b36a6edeab68c81c652a6679448ff Nov 23 08:41:20 crc kubenswrapper[5028]: I1123 08:41:20.150351 5028 generic.go:334] "Generic (PLEG): container finished" podID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerID="08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca" exitCode=0 Nov 23 08:41:20 crc kubenswrapper[5028]: I1123 08:41:20.150445 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snwc5" event={"ID":"91ee638f-99ac-4f86-9b8a-4e8ad2feca25","Type":"ContainerDied","Data":"08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca"} Nov 23 08:41:20 crc kubenswrapper[5028]: I1123 08:41:20.150624 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snwc5" event={"ID":"91ee638f-99ac-4f86-9b8a-4e8ad2feca25","Type":"ContainerStarted","Data":"6702a0336e57166c8d0502b11731e54f0a7b36a6edeab68c81c652a6679448ff"} Nov 23 08:41:26 crc kubenswrapper[5028]: I1123 08:41:26.241060 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41","Type":"ContainerStarted","Data":"8c12aafbf21cd516e847e4ab88b5f3351deb8a99c90b60acd1c30f036fd53f85"} Nov 23 08:41:26 crc kubenswrapper[5028]: I1123 08:41:26.249519 5028 generic.go:334] "Generic (PLEG): container finished" podID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerID="c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce" exitCode=0 Nov 23 08:41:26 crc kubenswrapper[5028]: I1123 08:41:26.249648 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snwc5" event={"ID":"91ee638f-99ac-4f86-9b8a-4e8ad2feca25","Type":"ContainerDied","Data":"c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce"} Nov 23 08:41:26 crc kubenswrapper[5028]: I1123 08:41:26.281799 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.725394952 podStartE2EDuration="12.281772027s" podCreationTimestamp="2025-11-23 08:41:14 +0000 UTC" firstStartedPulling="2025-11-23 08:41:14.927115919 +0000 UTC m=+6658.624520728" lastFinishedPulling="2025-11-23 08:41:25.483493014 +0000 UTC m=+6669.180897803" observedRunningTime="2025-11-23 08:41:26.269309132 +0000 UTC m=+6669.966713931" watchObservedRunningTime="2025-11-23 08:41:26.281772027 +0000 
UTC m=+6669.979176806" Nov 23 08:41:27 crc kubenswrapper[5028]: I1123 08:41:27.261041 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snwc5" event={"ID":"91ee638f-99ac-4f86-9b8a-4e8ad2feca25","Type":"ContainerStarted","Data":"71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668"} Nov 23 08:41:27 crc kubenswrapper[5028]: I1123 08:41:27.288740 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-snwc5" podStartSLOduration=2.54666165 podStartE2EDuration="9.288718315s" podCreationTimestamp="2025-11-23 08:41:18 +0000 UTC" firstStartedPulling="2025-11-23 08:41:20.152583372 +0000 UTC m=+6663.849988151" lastFinishedPulling="2025-11-23 08:41:26.894640037 +0000 UTC m=+6670.592044816" observedRunningTime="2025-11-23 08:41:27.280264878 +0000 UTC m=+6670.977669657" watchObservedRunningTime="2025-11-23 08:41:27.288718315 +0000 UTC m=+6670.986123094" Nov 23 08:41:28 crc kubenswrapper[5028]: I1123 08:41:28.899375 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:28 crc kubenswrapper[5028]: I1123 08:41:28.901201 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:28 crc kubenswrapper[5028]: I1123 08:41:28.944865 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:30 crc kubenswrapper[5028]: I1123 08:41:30.946988 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:41:30 crc kubenswrapper[5028]: I1123 08:41:30.947068 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:41:38 crc kubenswrapper[5028]: I1123 08:41:38.956833 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.015883 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snwc5"] Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.380616 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-snwc5" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="registry-server" containerID="cri-o://71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668" gracePeriod=2 Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.912337 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.953500 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-utilities\") pod \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.953702 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtjdd\" (UniqueName: \"kubernetes.io/projected/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-kube-api-access-qtjdd\") pod \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.953740 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-catalog-content\") pod \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\" (UID: \"91ee638f-99ac-4f86-9b8a-4e8ad2feca25\") " Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.960141 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-utilities" (OuterVolumeSpecName: "utilities") pod "91ee638f-99ac-4f86-9b8a-4e8ad2feca25" (UID: "91ee638f-99ac-4f86-9b8a-4e8ad2feca25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.973250 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-kube-api-access-qtjdd" (OuterVolumeSpecName: "kube-api-access-qtjdd") pod "91ee638f-99ac-4f86-9b8a-4e8ad2feca25" (UID: "91ee638f-99ac-4f86-9b8a-4e8ad2feca25"). InnerVolumeSpecName "kube-api-access-qtjdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:41:39 crc kubenswrapper[5028]: I1123 08:41:39.979756 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91ee638f-99ac-4f86-9b8a-4e8ad2feca25" (UID: "91ee638f-99ac-4f86-9b8a-4e8ad2feca25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.055432 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtjdd\" (UniqueName: \"kubernetes.io/projected/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-kube-api-access-qtjdd\") on node \"crc\" DevicePath \"\"" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.055757 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.055771 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91ee638f-99ac-4f86-9b8a-4e8ad2feca25-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.397252 5028 generic.go:334] "Generic (PLEG): container finished" podID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerID="71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668" exitCode=0 Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.397651 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snwc5" event={"ID":"91ee638f-99ac-4f86-9b8a-4e8ad2feca25","Type":"ContainerDied","Data":"71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668"} Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.397793 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snwc5" event={"ID":"91ee638f-99ac-4f86-9b8a-4e8ad2feca25","Type":"ContainerDied","Data":"6702a0336e57166c8d0502b11731e54f0a7b36a6edeab68c81c652a6679448ff"} Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.397819 5028 scope.go:117] "RemoveContainer" containerID="71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.398101 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snwc5" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.423726 5028 scope.go:117] "RemoveContainer" containerID="c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.445679 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snwc5"] Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.456220 5028 scope.go:117] "RemoveContainer" containerID="08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.458907 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-snwc5"] Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.486586 5028 scope.go:117] "RemoveContainer" containerID="71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668" Nov 23 08:41:40 crc kubenswrapper[5028]: E1123 08:41:40.487356 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668\": container with ID starting with 71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668 not found: ID does not exist" containerID="71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.487420 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668"} err="failed to get container status \"71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668\": rpc error: code = NotFound desc = could not find container \"71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668\": container with ID starting with 71e206429e8d45065f24b1ca6fa161e842c0d903e3f96854ad7ea4b482c15668 not found: ID does not exist" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.487460 5028 scope.go:117] "RemoveContainer" containerID="c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce" Nov 23 08:41:40 crc kubenswrapper[5028]: E1123 08:41:40.487874 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce\": container with ID starting with c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce not found: ID does not exist" containerID="c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.487927 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce"} err="failed to get container status \"c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce\": rpc error: code = NotFound desc = could not find container \"c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce\": container with ID starting with c6b69a82d6170c791bfcf1299e54088fe1efe36d5c7a77818a2a2a462139d4ce not found: ID does not exist" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.487980 5028 scope.go:117] "RemoveContainer" containerID="08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca" Nov 23 08:41:40 crc kubenswrapper[5028]: E1123 08:41:40.488262 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca\": container with ID starting with 08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca not found: ID does not exist" containerID="08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca" Nov 23 08:41:40 crc kubenswrapper[5028]: I1123 08:41:40.488296 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca"} err="failed to get container status \"08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca\": rpc error: code = NotFound desc = could not find container \"08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca\": container with ID starting with 08d38635457c0dffacd6d388b133a6208f911ba7f5fa4513f4742a37332e5aca not found: ID does not exist" Nov 23 08:41:41 crc kubenswrapper[5028]: I1123 08:41:41.071097 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" path="/var/lib/kubelet/pods/91ee638f-99ac-4f86-9b8a-4e8ad2feca25/volumes" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.713603 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5k8d2"] Nov 23 08:41:49 crc kubenswrapper[5028]: E1123 08:41:49.715439 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="registry-server" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.715465 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="registry-server" Nov 23 08:41:49 crc kubenswrapper[5028]: E1123 08:41:49.715504 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="extract-content" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.716049 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="extract-content" Nov 23 08:41:49 crc kubenswrapper[5028]: E1123 08:41:49.716094 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="extract-utilities" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.717684 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="extract-utilities" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.718015 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="91ee638f-99ac-4f86-9b8a-4e8ad2feca25" containerName="registry-server" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.721094 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.736624 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5k8d2"] Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.873346 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbzp\" (UniqueName: \"kubernetes.io/projected/30296944-870a-481f-8c63-24755c7e2404-kube-api-access-btbzp\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.873486 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-utilities\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.873522 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-catalog-content\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.975162 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btbzp\" (UniqueName: \"kubernetes.io/projected/30296944-870a-481f-8c63-24755c7e2404-kube-api-access-btbzp\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.975281 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-utilities\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.975307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-catalog-content\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.976133 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-catalog-content\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:49 crc kubenswrapper[5028]: I1123 08:41:49.976171 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-utilities\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:50 crc kubenswrapper[5028]: I1123 08:41:50.009553 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-btbzp\" (UniqueName: \"kubernetes.io/projected/30296944-870a-481f-8c63-24755c7e2404-kube-api-access-btbzp\") pod \"community-operators-5k8d2\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:50 crc kubenswrapper[5028]: I1123 08:41:50.064738 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:41:50 crc kubenswrapper[5028]: I1123 08:41:50.618566 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5k8d2"] Nov 23 08:41:51 crc kubenswrapper[5028]: I1123 08:41:51.530365 5028 generic.go:334] "Generic (PLEG): container finished" podID="30296944-870a-481f-8c63-24755c7e2404" containerID="f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942" exitCode=0 Nov 23 08:41:51 crc kubenswrapper[5028]: I1123 08:41:51.530584 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k8d2" event={"ID":"30296944-870a-481f-8c63-24755c7e2404","Type":"ContainerDied","Data":"f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942"} Nov 23 08:41:51 crc kubenswrapper[5028]: I1123 08:41:51.530916 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k8d2" event={"ID":"30296944-870a-481f-8c63-24755c7e2404","Type":"ContainerStarted","Data":"c9dd3ce915b14cba38a8c9e2ea196dc9480d498a6dadd417c95488d6657c1d71"} Nov 23 08:41:53 crc kubenswrapper[5028]: I1123 08:41:53.556194 5028 generic.go:334] "Generic (PLEG): container finished" podID="30296944-870a-481f-8c63-24755c7e2404" containerID="75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc" exitCode=0 Nov 23 08:41:53 crc kubenswrapper[5028]: I1123 08:41:53.556235 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k8d2" event={"ID":"30296944-870a-481f-8c63-24755c7e2404","Type":"ContainerDied","Data":"75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc"} Nov 23 08:41:54 crc kubenswrapper[5028]: I1123 08:41:54.570125 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k8d2" event={"ID":"30296944-870a-481f-8c63-24755c7e2404","Type":"ContainerStarted","Data":"9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f"} Nov 23 08:41:54 crc kubenswrapper[5028]: I1123 08:41:54.592527 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5k8d2" podStartSLOduration=3.15809476 podStartE2EDuration="5.592505101s" podCreationTimestamp="2025-11-23 08:41:49 +0000 UTC" firstStartedPulling="2025-11-23 08:41:51.535987606 +0000 UTC m=+6695.233392615" lastFinishedPulling="2025-11-23 08:41:53.970398177 +0000 UTC m=+6697.667802956" observedRunningTime="2025-11-23 08:41:54.588153435 +0000 UTC m=+6698.285558214" watchObservedRunningTime="2025-11-23 08:41:54.592505101 +0000 UTC m=+6698.289909880" Nov 23 08:42:00 crc kubenswrapper[5028]: I1123 08:42:00.065633 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:42:00 crc kubenswrapper[5028]: I1123 08:42:00.067777 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:42:00 crc kubenswrapper[5028]: I1123 08:42:00.137490 5028 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:42:00 crc kubenswrapper[5028]: I1123 08:42:00.718910 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:42:00 crc kubenswrapper[5028]: I1123 08:42:00.784774 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5k8d2"] Nov 23 08:42:00 crc kubenswrapper[5028]: I1123 08:42:00.947413 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:42:00 crc kubenswrapper[5028]: I1123 08:42:00.948298 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:42:02 crc kubenswrapper[5028]: I1123 08:42:02.659022 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5k8d2" podUID="30296944-870a-481f-8c63-24755c7e2404" containerName="registry-server" containerID="cri-o://9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f" gracePeriod=2 Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.179877 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.265706 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-catalog-content\") pod \"30296944-870a-481f-8c63-24755c7e2404\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.265858 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btbzp\" (UniqueName: \"kubernetes.io/projected/30296944-870a-481f-8c63-24755c7e2404-kube-api-access-btbzp\") pod \"30296944-870a-481f-8c63-24755c7e2404\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.266024 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-utilities\") pod \"30296944-870a-481f-8c63-24755c7e2404\" (UID: \"30296944-870a-481f-8c63-24755c7e2404\") " Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.267771 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-utilities" (OuterVolumeSpecName: "utilities") pod "30296944-870a-481f-8c63-24755c7e2404" (UID: "30296944-870a-481f-8c63-24755c7e2404"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.273514 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30296944-870a-481f-8c63-24755c7e2404-kube-api-access-btbzp" (OuterVolumeSpecName: "kube-api-access-btbzp") pod "30296944-870a-481f-8c63-24755c7e2404" (UID: "30296944-870a-481f-8c63-24755c7e2404"). InnerVolumeSpecName "kube-api-access-btbzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.368739 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btbzp\" (UniqueName: \"kubernetes.io/projected/30296944-870a-481f-8c63-24755c7e2404-kube-api-access-btbzp\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.368773 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.480709 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30296944-870a-481f-8c63-24755c7e2404" (UID: "30296944-870a-481f-8c63-24755c7e2404"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.572642 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30296944-870a-481f-8c63-24755c7e2404-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.676471 5028 generic.go:334] "Generic (PLEG): container finished" podID="30296944-870a-481f-8c63-24755c7e2404" containerID="9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f" exitCode=0 Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.676531 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k8d2" event={"ID":"30296944-870a-481f-8c63-24755c7e2404","Type":"ContainerDied","Data":"9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f"} Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.676583 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k8d2" event={"ID":"30296944-870a-481f-8c63-24755c7e2404","Type":"ContainerDied","Data":"c9dd3ce915b14cba38a8c9e2ea196dc9480d498a6dadd417c95488d6657c1d71"} Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.676603 5028 scope.go:117] "RemoveContainer" containerID="9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.676752 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5k8d2" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.732242 5028 scope.go:117] "RemoveContainer" containerID="75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.733638 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5k8d2"] Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.742116 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5k8d2"] Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.772725 5028 scope.go:117] "RemoveContainer" containerID="f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.813291 5028 scope.go:117] "RemoveContainer" containerID="9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f" Nov 23 08:42:03 crc kubenswrapper[5028]: E1123 08:42:03.814311 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f\": container with ID starting with 9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f not found: ID does not exist" containerID="9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.814379 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f"} err="failed to get container status \"9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f\": rpc error: code = NotFound desc = could not find container \"9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f\": container with ID starting with 9baef0715192b9e9c79ee450c2d5df49fd6fa9c462de378f7fe8b96abfd6736f not found: ID does not exist" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.814423 5028 scope.go:117] "RemoveContainer" containerID="75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc" Nov 23 08:42:03 crc kubenswrapper[5028]: E1123 08:42:03.815061 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc\": container with ID starting with 75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc not found: ID does not exist" containerID="75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.815137 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc"} err="failed to get container status \"75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc\": rpc error: code = NotFound desc = could not find container \"75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc\": container with ID starting with 75e6e709a5abedbced86e487f89d3839aa685782f6aa2c7cb3265c77d8a559bc not found: ID does not exist" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.815192 5028 scope.go:117] "RemoveContainer" containerID="f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942" Nov 23 08:42:03 crc kubenswrapper[5028]: E1123 08:42:03.815811 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942\": container with ID starting with f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942 not found: ID does not exist" containerID="f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942" Nov 23 08:42:03 crc kubenswrapper[5028]: I1123 08:42:03.815870 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942"} err="failed to get container status \"f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942\": rpc error: code = NotFound desc = could not find container \"f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942\": container with ID starting with f58289b7e0cf494304af5223d718ded4d87b16e14b04b55e9fe2d7b87c70b942 not found: ID does not exist" Nov 23 08:42:05 crc kubenswrapper[5028]: I1123 08:42:05.069403 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30296944-870a-481f-8c63-24755c7e2404" path="/var/lib/kubelet/pods/30296944-870a-481f-8c63-24755c7e2404/volumes" Nov 23 08:42:30 crc kubenswrapper[5028]: I1123 08:42:30.946490 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:42:30 crc kubenswrapper[5028]: I1123 08:42:30.947839 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:42:30 crc kubenswrapper[5028]: I1123 08:42:30.948128 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 08:42:30 crc kubenswrapper[5028]: I1123 08:42:30.950626 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:42:30 crc kubenswrapper[5028]: I1123 08:42:30.950813 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" gracePeriod=600 Nov 23 08:42:31 crc kubenswrapper[5028]: E1123 08:42:31.078762 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:42:31 crc kubenswrapper[5028]: I1123 08:42:31.998356 5028 generic.go:334] 
"Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" exitCode=0 Nov 23 08:42:31 crc kubenswrapper[5028]: I1123 08:42:31.998412 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"} Nov 23 08:42:31 crc kubenswrapper[5028]: I1123 08:42:31.998455 5028 scope.go:117] "RemoveContainer" containerID="831fc7cc0754cd125280584b4847bfdb0bf23ab4a16c900638efc104572da6ce" Nov 23 08:42:31 crc kubenswrapper[5028]: I1123 08:42:31.999753 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:42:32 crc kubenswrapper[5028]: E1123 08:42:32.000481 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:42:44 crc kubenswrapper[5028]: I1123 08:42:44.053350 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:42:44 crc kubenswrapper[5028]: E1123 08:42:44.054355 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.319741 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-dwvr9"] Nov 23 08:42:54 crc kubenswrapper[5028]: E1123 08:42:54.320978 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30296944-870a-481f-8c63-24755c7e2404" containerName="extract-content" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.320997 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="30296944-870a-481f-8c63-24755c7e2404" containerName="extract-content" Nov 23 08:42:54 crc kubenswrapper[5028]: E1123 08:42:54.321039 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30296944-870a-481f-8c63-24755c7e2404" containerName="registry-server" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.321045 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="30296944-870a-481f-8c63-24755c7e2404" containerName="registry-server" Nov 23 08:42:54 crc kubenswrapper[5028]: E1123 08:42:54.321060 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30296944-870a-481f-8c63-24755c7e2404" containerName="extract-utilities" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.321067 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="30296944-870a-481f-8c63-24755c7e2404" containerName="extract-utilities" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.321219 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="30296944-870a-481f-8c63-24755c7e2404" 
containerName="registry-server" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.321987 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.335815 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dwvr9"] Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.374454 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss767\" (UniqueName: \"kubernetes.io/projected/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-kube-api-access-ss767\") pod \"barbican-db-create-dwvr9\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.374563 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-operator-scripts\") pod \"barbican-db-create-dwvr9\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.416064 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6a52-account-create-ztjqr"] Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.417418 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.419903 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.447326 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6a52-account-create-ztjqr"] Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.477562 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f390829a-d8ff-46e1-b5b9-28f041a8abb6-operator-scripts\") pod \"barbican-6a52-account-create-ztjqr\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.477676 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss767\" (UniqueName: \"kubernetes.io/projected/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-kube-api-access-ss767\") pod \"barbican-db-create-dwvr9\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.477749 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkfs\" (UniqueName: \"kubernetes.io/projected/f390829a-d8ff-46e1-b5b9-28f041a8abb6-kube-api-access-pdkfs\") pod \"barbican-6a52-account-create-ztjqr\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.477820 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-operator-scripts\") pod \"barbican-db-create-dwvr9\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc 
kubenswrapper[5028]: I1123 08:42:54.478916 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-operator-scripts\") pod \"barbican-db-create-dwvr9\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.502090 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss767\" (UniqueName: \"kubernetes.io/projected/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-kube-api-access-ss767\") pod \"barbican-db-create-dwvr9\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.579271 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdkfs\" (UniqueName: \"kubernetes.io/projected/f390829a-d8ff-46e1-b5b9-28f041a8abb6-kube-api-access-pdkfs\") pod \"barbican-6a52-account-create-ztjqr\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.579426 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f390829a-d8ff-46e1-b5b9-28f041a8abb6-operator-scripts\") pod \"barbican-6a52-account-create-ztjqr\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.580516 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f390829a-d8ff-46e1-b5b9-28f041a8abb6-operator-scripts\") pod \"barbican-6a52-account-create-ztjqr\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.596438 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdkfs\" (UniqueName: \"kubernetes.io/projected/f390829a-d8ff-46e1-b5b9-28f041a8abb6-kube-api-access-pdkfs\") pod \"barbican-6a52-account-create-ztjqr\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.645567 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:54 crc kubenswrapper[5028]: I1123 08:42:54.747562 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:55 crc kubenswrapper[5028]: I1123 08:42:55.166808 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-dwvr9"] Nov 23 08:42:55 crc kubenswrapper[5028]: I1123 08:42:55.245753 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dwvr9" event={"ID":"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0","Type":"ContainerStarted","Data":"14aee4bad34c6e47dccbccdaa3badbcff8d57d858914a0753b2afe927077efd7"} Nov 23 08:42:55 crc kubenswrapper[5028]: I1123 08:42:55.260053 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6a52-account-create-ztjqr"] Nov 23 08:42:55 crc kubenswrapper[5028]: W1123 08:42:55.270229 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf390829a_d8ff_46e1_b5b9_28f041a8abb6.slice/crio-29a228fea06c00dabde0dc00ffd091cbd239bcd08b4f1c53b599ed770d765f9e WatchSource:0}: Error finding container 29a228fea06c00dabde0dc00ffd091cbd239bcd08b4f1c53b599ed770d765f9e: Status 404 returned error can't find the container with id 29a228fea06c00dabde0dc00ffd091cbd239bcd08b4f1c53b599ed770d765f9e Nov 23 08:42:56 crc kubenswrapper[5028]: I1123 08:42:56.262289 5028 generic.go:334] "Generic (PLEG): container finished" podID="f390829a-d8ff-46e1-b5b9-28f041a8abb6" containerID="eea468cbd55fd618afaab7488182319ee7578917e5842eaceb27f26c3dfc9c8c" exitCode=0 Nov 23 08:42:56 crc kubenswrapper[5028]: I1123 08:42:56.262419 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6a52-account-create-ztjqr" event={"ID":"f390829a-d8ff-46e1-b5b9-28f041a8abb6","Type":"ContainerDied","Data":"eea468cbd55fd618afaab7488182319ee7578917e5842eaceb27f26c3dfc9c8c"} Nov 23 08:42:56 crc kubenswrapper[5028]: I1123 08:42:56.262999 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6a52-account-create-ztjqr" event={"ID":"f390829a-d8ff-46e1-b5b9-28f041a8abb6","Type":"ContainerStarted","Data":"29a228fea06c00dabde0dc00ffd091cbd239bcd08b4f1c53b599ed770d765f9e"} Nov 23 08:42:56 crc kubenswrapper[5028]: I1123 08:42:56.268503 5028 generic.go:334] "Generic (PLEG): container finished" podID="81eb5d8f-1624-4544-96ef-f6d7f9f11bc0" containerID="450e2054e7700cc3989e5198652634753d6eeb36f40a1ce45641c0a9d5714cca" exitCode=0 Nov 23 08:42:56 crc kubenswrapper[5028]: I1123 08:42:56.268632 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dwvr9" event={"ID":"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0","Type":"ContainerDied","Data":"450e2054e7700cc3989e5198652634753d6eeb36f40a1ce45641c0a9d5714cca"} Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.063555 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:42:57 crc kubenswrapper[5028]: E1123 08:42:57.064366 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.666603 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.672071 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.740342 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-operator-scripts\") pod \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.740411 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdkfs\" (UniqueName: \"kubernetes.io/projected/f390829a-d8ff-46e1-b5b9-28f041a8abb6-kube-api-access-pdkfs\") pod \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.740503 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss767\" (UniqueName: \"kubernetes.io/projected/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-kube-api-access-ss767\") pod \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\" (UID: \"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0\") " Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.740607 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f390829a-d8ff-46e1-b5b9-28f041a8abb6-operator-scripts\") pod \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\" (UID: \"f390829a-d8ff-46e1-b5b9-28f041a8abb6\") " Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.741265 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81eb5d8f-1624-4544-96ef-f6d7f9f11bc0" (UID: "81eb5d8f-1624-4544-96ef-f6d7f9f11bc0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.741424 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f390829a-d8ff-46e1-b5b9-28f041a8abb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f390829a-d8ff-46e1-b5b9-28f041a8abb6" (UID: "f390829a-d8ff-46e1-b5b9-28f041a8abb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.746724 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f390829a-d8ff-46e1-b5b9-28f041a8abb6-kube-api-access-pdkfs" (OuterVolumeSpecName: "kube-api-access-pdkfs") pod "f390829a-d8ff-46e1-b5b9-28f041a8abb6" (UID: "f390829a-d8ff-46e1-b5b9-28f041a8abb6"). InnerVolumeSpecName "kube-api-access-pdkfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.746830 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-kube-api-access-ss767" (OuterVolumeSpecName: "kube-api-access-ss767") pod "81eb5d8f-1624-4544-96ef-f6d7f9f11bc0" (UID: "81eb5d8f-1624-4544-96ef-f6d7f9f11bc0"). InnerVolumeSpecName "kube-api-access-ss767". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.843870 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss767\" (UniqueName: \"kubernetes.io/projected/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-kube-api-access-ss767\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.843908 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f390829a-d8ff-46e1-b5b9-28f041a8abb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.843918 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:57 crc kubenswrapper[5028]: I1123 08:42:57.843927 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdkfs\" (UniqueName: \"kubernetes.io/projected/f390829a-d8ff-46e1-b5b9-28f041a8abb6-kube-api-access-pdkfs\") on node \"crc\" DevicePath \"\"" Nov 23 08:42:58 crc kubenswrapper[5028]: I1123 08:42:58.289824 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6a52-account-create-ztjqr" Nov 23 08:42:58 crc kubenswrapper[5028]: I1123 08:42:58.289818 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6a52-account-create-ztjqr" event={"ID":"f390829a-d8ff-46e1-b5b9-28f041a8abb6","Type":"ContainerDied","Data":"29a228fea06c00dabde0dc00ffd091cbd239bcd08b4f1c53b599ed770d765f9e"} Nov 23 08:42:58 crc kubenswrapper[5028]: I1123 08:42:58.290394 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a228fea06c00dabde0dc00ffd091cbd239bcd08b4f1c53b599ed770d765f9e" Nov 23 08:42:58 crc kubenswrapper[5028]: I1123 08:42:58.294300 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-dwvr9" event={"ID":"81eb5d8f-1624-4544-96ef-f6d7f9f11bc0","Type":"ContainerDied","Data":"14aee4bad34c6e47dccbccdaa3badbcff8d57d858914a0753b2afe927077efd7"} Nov 23 08:42:58 crc kubenswrapper[5028]: I1123 08:42:58.294392 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14aee4bad34c6e47dccbccdaa3badbcff8d57d858914a0753b2afe927077efd7" Nov 23 08:42:58 crc kubenswrapper[5028]: I1123 08:42:58.294486 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-dwvr9" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.698574 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xpnns"] Nov 23 08:42:59 crc kubenswrapper[5028]: E1123 08:42:59.699602 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f390829a-d8ff-46e1-b5b9-28f041a8abb6" containerName="mariadb-account-create" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.699623 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f390829a-d8ff-46e1-b5b9-28f041a8abb6" containerName="mariadb-account-create" Nov 23 08:42:59 crc kubenswrapper[5028]: E1123 08:42:59.699651 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81eb5d8f-1624-4544-96ef-f6d7f9f11bc0" containerName="mariadb-database-create" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.699661 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="81eb5d8f-1624-4544-96ef-f6d7f9f11bc0" containerName="mariadb-database-create" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.699864 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f390829a-d8ff-46e1-b5b9-28f041a8abb6" containerName="mariadb-account-create" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.699897 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="81eb5d8f-1624-4544-96ef-f6d7f9f11bc0" containerName="mariadb-database-create" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.700650 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.703544 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.704798 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zb6kq" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.714913 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xpnns"] Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.779310 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdjn\" (UniqueName: \"kubernetes.io/projected/e55929aa-03b3-4a01-9931-5cd72deae4c5-kube-api-access-zrdjn\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.779669 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-combined-ca-bundle\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.779758 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-db-sync-config-data\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.882108 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdjn\" (UniqueName: 
\"kubernetes.io/projected/e55929aa-03b3-4a01-9931-5cd72deae4c5-kube-api-access-zrdjn\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.882256 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-combined-ca-bundle\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.882287 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-db-sync-config-data\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.894998 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-db-sync-config-data\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.903537 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-combined-ca-bundle\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:42:59 crc kubenswrapper[5028]: I1123 08:42:59.907400 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdjn\" (UniqueName: \"kubernetes.io/projected/e55929aa-03b3-4a01-9931-5cd72deae4c5-kube-api-access-zrdjn\") pod \"barbican-db-sync-xpnns\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " pod="openstack/barbican-db-sync-xpnns" Nov 23 08:43:00 crc kubenswrapper[5028]: I1123 08:43:00.026737 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xpnns" Nov 23 08:43:00 crc kubenswrapper[5028]: I1123 08:43:00.560169 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xpnns"] Nov 23 08:43:01 crc kubenswrapper[5028]: I1123 08:43:01.325044 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpnns" event={"ID":"e55929aa-03b3-4a01-9931-5cd72deae4c5","Type":"ContainerStarted","Data":"b74fa617bac18740590a7eb73969a9506083316f05b23a9795e1ca1a17b459ab"} Nov 23 08:43:05 crc kubenswrapper[5028]: I1123 08:43:05.365230 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpnns" event={"ID":"e55929aa-03b3-4a01-9931-5cd72deae4c5","Type":"ContainerStarted","Data":"8a82c1c7d81135ee9627db662020cfb566d9d4b3020d4649fb82f53566334da1"} Nov 23 08:43:06 crc kubenswrapper[5028]: I1123 08:43:06.405007 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xpnns" podStartSLOduration=2.837223978 podStartE2EDuration="7.404985955s" podCreationTimestamp="2025-11-23 08:42:59 +0000 UTC" firstStartedPulling="2025-11-23 08:43:00.574050594 +0000 UTC m=+6764.271455393" lastFinishedPulling="2025-11-23 08:43:05.141812591 +0000 UTC m=+6768.839217370" observedRunningTime="2025-11-23 08:43:06.398181248 +0000 UTC m=+6770.095586027" watchObservedRunningTime="2025-11-23 08:43:06.404985955 +0000 UTC m=+6770.102390734" Nov 23 08:43:08 crc kubenswrapper[5028]: I1123 08:43:08.397852 5028 generic.go:334] "Generic (PLEG): container finished" podID="e55929aa-03b3-4a01-9931-5cd72deae4c5" containerID="8a82c1c7d81135ee9627db662020cfb566d9d4b3020d4649fb82f53566334da1" exitCode=0 Nov 23 08:43:08 crc kubenswrapper[5028]: I1123 08:43:08.397906 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpnns" event={"ID":"e55929aa-03b3-4a01-9931-5cd72deae4c5","Type":"ContainerDied","Data":"8a82c1c7d81135ee9627db662020cfb566d9d4b3020d4649fb82f53566334da1"} Nov 23 08:43:09 crc kubenswrapper[5028]: I1123 08:43:09.053610 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:43:09 crc kubenswrapper[5028]: E1123 08:43:09.054394 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:43:09 crc kubenswrapper[5028]: I1123 08:43:09.844935 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xpnns" Nov 23 08:43:09 crc kubenswrapper[5028]: I1123 08:43:09.975194 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-db-sync-config-data\") pod \"e55929aa-03b3-4a01-9931-5cd72deae4c5\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " Nov 23 08:43:09 crc kubenswrapper[5028]: I1123 08:43:09.975465 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-combined-ca-bundle\") pod \"e55929aa-03b3-4a01-9931-5cd72deae4c5\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " Nov 23 08:43:09 crc kubenswrapper[5028]: I1123 08:43:09.975572 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdjn\" (UniqueName: \"kubernetes.io/projected/e55929aa-03b3-4a01-9931-5cd72deae4c5-kube-api-access-zrdjn\") pod \"e55929aa-03b3-4a01-9931-5cd72deae4c5\" (UID: \"e55929aa-03b3-4a01-9931-5cd72deae4c5\") " Nov 23 08:43:09 crc kubenswrapper[5028]: I1123 08:43:09.985855 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e55929aa-03b3-4a01-9931-5cd72deae4c5" (UID: "e55929aa-03b3-4a01-9931-5cd72deae4c5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:43:09 crc kubenswrapper[5028]: I1123 08:43:09.985974 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55929aa-03b3-4a01-9931-5cd72deae4c5-kube-api-access-zrdjn" (OuterVolumeSpecName: "kube-api-access-zrdjn") pod "e55929aa-03b3-4a01-9931-5cd72deae4c5" (UID: "e55929aa-03b3-4a01-9931-5cd72deae4c5"). InnerVolumeSpecName "kube-api-access-zrdjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.029818 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e55929aa-03b3-4a01-9931-5cd72deae4c5" (UID: "e55929aa-03b3-4a01-9931-5cd72deae4c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.077276 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.077330 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdjn\" (UniqueName: \"kubernetes.io/projected/e55929aa-03b3-4a01-9931-5cd72deae4c5-kube-api-access-zrdjn\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.077345 5028 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e55929aa-03b3-4a01-9931-5cd72deae4c5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.427820 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xpnns" event={"ID":"e55929aa-03b3-4a01-9931-5cd72deae4c5","Type":"ContainerDied","Data":"b74fa617bac18740590a7eb73969a9506083316f05b23a9795e1ca1a17b459ab"} Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.427902 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b74fa617bac18740590a7eb73969a9506083316f05b23a9795e1ca1a17b459ab" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.428025 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xpnns" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.694462 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8b8c9c8f4-mvckm"] Nov 23 08:43:10 crc kubenswrapper[5028]: E1123 08:43:10.695072 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55929aa-03b3-4a01-9931-5cd72deae4c5" containerName="barbican-db-sync" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.695100 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55929aa-03b3-4a01-9931-5cd72deae4c5" containerName="barbican-db-sync" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.695386 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55929aa-03b3-4a01-9931-5cd72deae4c5" containerName="barbican-db-sync" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.696670 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.699486 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zb6kq" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.699771 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.704191 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.719781 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-58c79c4ff5-mls6f"] Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.721713 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.723987 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.731642 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8b8c9c8f4-mvckm"] Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.741968 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58c79c4ff5-mls6f"] Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.811964 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9497d7c6f-wbhg4"] Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.813492 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.829817 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9497d7c6f-wbhg4"] Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.896898 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d4e170-a8f0-4e35-8db5-edd058b05027-logs\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.896993 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjg4\" (UniqueName: \"kubernetes.io/projected/e6d4e170-a8f0-4e35-8db5-edd058b05027-kube-api-access-vpjg4\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897032 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-dns-svc\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897054 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-combined-ca-bundle\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897074 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-config\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897098 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-sb\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " 
pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897149 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-nb\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897216 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-config-data\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897255 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-config-data-custom\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897282 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-combined-ca-bundle\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897339 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c24efbd-75c6-4233-86d9-6b04095d8bad-logs\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897371 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw56z\" (UniqueName: \"kubernetes.io/projected/5c24efbd-75c6-4233-86d9-6b04095d8bad-kube-api-access-cw56z\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897395 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-config-data\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.897415 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-config-data-custom\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.930791 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-57cd499dd6-rvkzk"] Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.935272 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.938090 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 23 08:43:10 crc kubenswrapper[5028]: I1123 08:43:10.943585 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57cd499dd6-rvkzk"] Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000579 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c24efbd-75c6-4233-86d9-6b04095d8bad-logs\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000661 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw56z\" (UniqueName: \"kubernetes.io/projected/5c24efbd-75c6-4233-86d9-6b04095d8bad-kube-api-access-cw56z\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000703 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-config-data\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000722 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-config-data-custom\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000785 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d4e170-a8f0-4e35-8db5-edd058b05027-logs\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000802 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjg4\" (UniqueName: \"kubernetes.io/projected/e6d4e170-a8f0-4e35-8db5-edd058b05027-kube-api-access-vpjg4\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000843 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-config-data-custom\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000883 5028 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-combined-ca-bundle\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000903 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-dns-svc\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000927 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-config\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.000977 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-sb\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.001030 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9zp\" (UniqueName: \"kubernetes.io/projected/f4191463-b4c9-4d75-b00e-853e28f4ec88-kube-api-access-zf9zp\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.001063 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4191463-b4c9-4d75-b00e-853e28f4ec88-logs\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.001129 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-nb\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.001189 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-459qq\" (UniqueName: \"kubernetes.io/projected/1e02e1ff-61b6-4890-aa5e-65ba5696b271-kube-api-access-459qq\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.001204 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c24efbd-75c6-4233-86d9-6b04095d8bad-logs\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002129 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-dns-svc\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002172 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-nb\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002231 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-config\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002472 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d4e170-a8f0-4e35-8db5-edd058b05027-logs\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002632 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-config-data\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002675 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-config-data\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002724 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-config-data-custom\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002767 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-combined-ca-bundle\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.002880 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-combined-ca-bundle\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.003292 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-sb\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.011781 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-config-data-custom\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.012150 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-config-data\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.015519 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-combined-ca-bundle\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.017892 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-config-data-custom\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.020561 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjg4\" (UniqueName: \"kubernetes.io/projected/e6d4e170-a8f0-4e35-8db5-edd058b05027-kube-api-access-vpjg4\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.026548 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d4e170-a8f0-4e35-8db5-edd058b05027-combined-ca-bundle\") pod \"barbican-keystone-listener-8b8c9c8f4-mvckm\" (UID: \"e6d4e170-a8f0-4e35-8db5-edd058b05027\") " pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.028092 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.028178 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c24efbd-75c6-4233-86d9-6b04095d8bad-config-data\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.028463 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw56z\" (UniqueName: \"kubernetes.io/projected/5c24efbd-75c6-4233-86d9-6b04095d8bad-kube-api-access-cw56z\") pod \"barbican-worker-58c79c4ff5-mls6f\" (UID: \"5c24efbd-75c6-4233-86d9-6b04095d8bad\") " pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.043396 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58c79c4ff5-mls6f" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.112158 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9zp\" (UniqueName: \"kubernetes.io/projected/f4191463-b4c9-4d75-b00e-853e28f4ec88-kube-api-access-zf9zp\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.112274 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4191463-b4c9-4d75-b00e-853e28f4ec88-logs\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.112440 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-459qq\" (UniqueName: \"kubernetes.io/projected/1e02e1ff-61b6-4890-aa5e-65ba5696b271-kube-api-access-459qq\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.112626 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-config-data\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.112819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-combined-ca-bundle\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.113070 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-config-data-custom\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.115776 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f4191463-b4c9-4d75-b00e-853e28f4ec88-logs\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.124397 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-combined-ca-bundle\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.124824 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-config-data-custom\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.126074 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4191463-b4c9-4d75-b00e-853e28f4ec88-config-data\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.138367 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9zp\" (UniqueName: \"kubernetes.io/projected/f4191463-b4c9-4d75-b00e-853e28f4ec88-kube-api-access-zf9zp\") pod \"barbican-api-57cd499dd6-rvkzk\" (UID: \"f4191463-b4c9-4d75-b00e-853e28f4ec88\") " pod="openstack/barbican-api-57cd499dd6-rvkzk" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.138488 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-459qq\" (UniqueName: \"kubernetes.io/projected/1e02e1ff-61b6-4890-aa5e-65ba5696b271-kube-api-access-459qq\") pod \"dnsmasq-dns-9497d7c6f-wbhg4\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") " pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.142471 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.256918 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57cd499dd6-rvkzk"
Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.632666 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58c79c4ff5-mls6f"]
Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.656719 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9497d7c6f-wbhg4"]
Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.709003 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8b8c9c8f4-mvckm"]
Nov 23 08:43:11 crc kubenswrapper[5028]: I1123 08:43:11.872914 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57cd499dd6-rvkzk"]
Nov 23 08:43:11 crc kubenswrapper[5028]: W1123 08:43:11.888915 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4191463_b4c9_4d75_b00e_853e28f4ec88.slice/crio-067ec828e6a03e881161aa8030131fda4e4fe05242f30aa20ae3e74db7cff782 WatchSource:0}: Error finding container 067ec828e6a03e881161aa8030131fda4e4fe05242f30aa20ae3e74db7cff782: Status 404 returned error can't find the container with id 067ec828e6a03e881161aa8030131fda4e4fe05242f30aa20ae3e74db7cff782
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.450447 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cd499dd6-rvkzk" event={"ID":"f4191463-b4c9-4d75-b00e-853e28f4ec88","Type":"ContainerStarted","Data":"d62e51bfeda6903fc5e2cd03c4d0c8b2d496365ee4bb3575c63a95332caa4273"}
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.450941 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cd499dd6-rvkzk" event={"ID":"f4191463-b4c9-4d75-b00e-853e28f4ec88","Type":"ContainerStarted","Data":"a1ce889c8241550db14ccd23c2ea53087717c932cc0e2e1d92beb5b426a0b180"}
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.451025 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57cd499dd6-rvkzk" event={"ID":"f4191463-b4c9-4d75-b00e-853e28f4ec88","Type":"ContainerStarted","Data":"067ec828e6a03e881161aa8030131fda4e4fe05242f30aa20ae3e74db7cff782"}
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.452023 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57cd499dd6-rvkzk"
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.452047 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57cd499dd6-rvkzk"
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.454429 5028 generic.go:334] "Generic (PLEG): container finished" podID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerID="b0382a93dc7b5337bd821db1219045b35681e9b07b42e28380c5003c53f7581f" exitCode=0
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.454520 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" event={"ID":"1e02e1ff-61b6-4890-aa5e-65ba5696b271","Type":"ContainerDied","Data":"b0382a93dc7b5337bd821db1219045b35681e9b07b42e28380c5003c53f7581f"}
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.454539 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" event={"ID":"1e02e1ff-61b6-4890-aa5e-65ba5696b271","Type":"ContainerStarted","Data":"b6396df7067f1ffad3589416d1c78e3b379ced9b200566156b2641258ca1ceac"}
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.457271 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c79c4ff5-mls6f" event={"ID":"5c24efbd-75c6-4233-86d9-6b04095d8bad","Type":"ContainerStarted","Data":"8b5370ee1c1212b45e7acf3f144d828c710c92ddb06c3d6f500929223e16c302"}
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.459901 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" event={"ID":"e6d4e170-a8f0-4e35-8db5-edd058b05027","Type":"ContainerStarted","Data":"be59d40458a4be54b883ae110e23be58ae3432d52396e7c37d2aaac8240dddf4"}
Nov 23 08:43:12 crc kubenswrapper[5028]: I1123 08:43:12.487797 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57cd499dd6-rvkzk" podStartSLOduration=2.487770746 podStartE2EDuration="2.487770746s" podCreationTimestamp="2025-11-23 08:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:12.472766269 +0000 UTC m=+6776.170171048" watchObservedRunningTime="2025-11-23 08:43:12.487770746 +0000 UTC m=+6776.185175535"
Nov 23 08:43:13 crc kubenswrapper[5028]: I1123 08:43:13.471818 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" event={"ID":"1e02e1ff-61b6-4890-aa5e-65ba5696b271","Type":"ContainerStarted","Data":"1971a3d9447b870a53769834b7d2d083779994565ad3e938d0d1cdb62d593a14"}
Nov 23 08:43:13 crc kubenswrapper[5028]: I1123 08:43:13.472614 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4"
Nov 23 08:43:13 crc kubenswrapper[5028]: I1123 08:43:13.474789 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c79c4ff5-mls6f" event={"ID":"5c24efbd-75c6-4233-86d9-6b04095d8bad","Type":"ContainerStarted","Data":"69161ae0defc5431254a3d78be2225c2df60bcc5c6a8ab0bba8103388ec2aeba"}
Nov 23 08:43:13 crc kubenswrapper[5028]: I1123 08:43:13.476347 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" event={"ID":"e6d4e170-a8f0-4e35-8db5-edd058b05027","Type":"ContainerStarted","Data":"683a18d0193cb7c07e9faea6f10b4a39847191eb824bfa2f4c2c88e1dfcecba7"}
Nov 23 08:43:13 crc kubenswrapper[5028]: I1123 08:43:13.503528 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" podStartSLOduration=3.503502617 podStartE2EDuration="3.503502617s" podCreationTimestamp="2025-11-23 08:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:13.495755418 +0000 UTC m=+6777.193160217" watchObservedRunningTime="2025-11-23 08:43:13.503502617 +0000 UTC m=+6777.200907396"
Nov 23 08:43:14 crc kubenswrapper[5028]: I1123 08:43:14.489890 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" event={"ID":"e6d4e170-a8f0-4e35-8db5-edd058b05027","Type":"ContainerStarted","Data":"bedd386278bd629d501e8704958520c31e4be1202445e196c9fcb27bbe7db302"}
Nov 23 08:43:14 crc kubenswrapper[5028]: I1123 08:43:14.493647 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c79c4ff5-mls6f" event={"ID":"5c24efbd-75c6-4233-86d9-6b04095d8bad","Type":"ContainerStarted","Data":"c9e4949ad77e3cafdc9f15f54e7a9d6d0509704660d65129e91781eb3416d61b"}
Nov 23 08:43:14 crc kubenswrapper[5028]: I1123 08:43:14.514061 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8b8c9c8f4-mvckm" podStartSLOduration=3.266125203 podStartE2EDuration="4.514035093s" podCreationTimestamp="2025-11-23 08:43:10 +0000 UTC" firstStartedPulling="2025-11-23 08:43:11.720216363 +0000 UTC m=+6775.417621142" lastFinishedPulling="2025-11-23 08:43:12.968126253 +0000 UTC m=+6776.665531032" observedRunningTime="2025-11-23 08:43:14.506203361 +0000 UTC m=+6778.203608140" watchObservedRunningTime="2025-11-23 08:43:14.514035093 +0000 UTC m=+6778.211439872"
Nov 23 08:43:14 crc kubenswrapper[5028]: I1123 08:43:14.526919 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-58c79c4ff5-mls6f" podStartSLOduration=3.198549721 podStartE2EDuration="4.526903668s" podCreationTimestamp="2025-11-23 08:43:10 +0000 UTC" firstStartedPulling="2025-11-23 08:43:11.638187887 +0000 UTC m=+6775.335592666" lastFinishedPulling="2025-11-23 08:43:12.966541834 +0000 UTC m=+6776.663946613" observedRunningTime="2025-11-23 08:43:14.526038596 +0000 UTC m=+6778.223443375" watchObservedRunningTime="2025-11-23 08:43:14.526903668 +0000 UTC m=+6778.224308447"
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.055466 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"
Nov 23 08:43:21 crc kubenswrapper[5028]: E1123 08:43:21.057228 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.144279 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4"
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.249341 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68445c757c-lwxlx"]
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.250058 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" podUID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerName="dnsmasq-dns" containerID="cri-o://6880882d84c2eceb815d2809c664850d6ab07ade76d246fab4eff968bb98d2e3" gracePeriod=10
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.574966 5028 generic.go:334] "Generic (PLEG): container finished" podID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerID="6880882d84c2eceb815d2809c664850d6ab07ade76d246fab4eff968bb98d2e3" exitCode=0
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.575025 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" event={"ID":"86f6e1f2-208a-47dd-ac12-14843ddf0d7a","Type":"ContainerDied","Data":"6880882d84c2eceb815d2809c664850d6ab07ade76d246fab4eff968bb98d2e3"}
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.829073 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.887826 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-nb\") pod \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") "
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.887879 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-config\") pod \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") "
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.887990 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-sb\") pod \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") "
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.888142 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-dns-svc\") pod \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") "
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.888224 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6kv9\" (UniqueName: \"kubernetes.io/projected/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-kube-api-access-q6kv9\") pod \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\" (UID: \"86f6e1f2-208a-47dd-ac12-14843ddf0d7a\") "
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.896681 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-kube-api-access-q6kv9" (OuterVolumeSpecName: "kube-api-access-q6kv9") pod "86f6e1f2-208a-47dd-ac12-14843ddf0d7a" (UID: "86f6e1f2-208a-47dd-ac12-14843ddf0d7a"). InnerVolumeSpecName "kube-api-access-q6kv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.943542 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-config" (OuterVolumeSpecName: "config") pod "86f6e1f2-208a-47dd-ac12-14843ddf0d7a" (UID: "86f6e1f2-208a-47dd-ac12-14843ddf0d7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.944746 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86f6e1f2-208a-47dd-ac12-14843ddf0d7a" (UID: "86f6e1f2-208a-47dd-ac12-14843ddf0d7a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.949886 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86f6e1f2-208a-47dd-ac12-14843ddf0d7a" (UID: "86f6e1f2-208a-47dd-ac12-14843ddf0d7a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.960196 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86f6e1f2-208a-47dd-ac12-14843ddf0d7a" (UID: "86f6e1f2-208a-47dd-ac12-14843ddf0d7a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.990612 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.990652 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-config\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.990662 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.990673 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:21 crc kubenswrapper[5028]: I1123 08:43:21.990684 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6kv9\" (UniqueName: \"kubernetes.io/projected/86f6e1f2-208a-47dd-ac12-14843ddf0d7a-kube-api-access-q6kv9\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.590805 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68445c757c-lwxlx" event={"ID":"86f6e1f2-208a-47dd-ac12-14843ddf0d7a","Type":"ContainerDied","Data":"39f8c01db36afef937d364875738905019083770ac88a515bbc14f061488ad60"}
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.590997 5028 scope.go:117] "RemoveContainer" containerID="6880882d84c2eceb815d2809c664850d6ab07ade76d246fab4eff968bb98d2e3"
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.591595 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68445c757c-lwxlx"
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.616671 5028 scope.go:117] "RemoveContainer" containerID="7ab45db4e29f738b232f9b2f82eba1ad465de876c87c93c2c30ee61a4d3de7dc"
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.652142 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68445c757c-lwxlx"]
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.660795 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68445c757c-lwxlx"]
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.743080 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57cd499dd6-rvkzk"
Nov 23 08:43:22 crc kubenswrapper[5028]: I1123 08:43:22.759895 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57cd499dd6-rvkzk"
Nov 23 08:43:23 crc kubenswrapper[5028]: I1123 08:43:23.066744 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" path="/var/lib/kubelet/pods/86f6e1f2-208a-47dd-ac12-14843ddf0d7a/volumes"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.885360 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xr55r"]
Nov 23 08:43:29 crc kubenswrapper[5028]: E1123 08:43:29.886645 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerName="dnsmasq-dns"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.886667 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerName="dnsmasq-dns"
Nov 23 08:43:29 crc kubenswrapper[5028]: E1123 08:43:29.886696 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerName="init"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.886705 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerName="init"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.886989 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f6e1f2-208a-47dd-ac12-14843ddf0d7a" containerName="dnsmasq-dns"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.887745 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.906925 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xr55r"]
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.961762 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzxt\" (UniqueName: \"kubernetes.io/projected/9067c25f-b237-4cd9-9077-3b6aad08551b-kube-api-access-hgzxt\") pod \"neutron-db-create-xr55r\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") " pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.961926 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9067c25f-b237-4cd9-9077-3b6aad08551b-operator-scripts\") pod \"neutron-db-create-xr55r\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") " pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.992273 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bed-account-create-q2wl9"]
Nov 23 08:43:29 crc kubenswrapper[5028]: I1123 08:43:29.993869 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.000747 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.006047 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bed-account-create-q2wl9"]
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.063682 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzxt\" (UniqueName: \"kubernetes.io/projected/9067c25f-b237-4cd9-9077-3b6aad08551b-kube-api-access-hgzxt\") pod \"neutron-db-create-xr55r\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") " pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.064532 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9067c25f-b237-4cd9-9077-3b6aad08551b-operator-scripts\") pod \"neutron-db-create-xr55r\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") " pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.065669 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9067c25f-b237-4cd9-9077-3b6aad08551b-operator-scripts\") pod \"neutron-db-create-xr55r\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") " pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.088079 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzxt\" (UniqueName: \"kubernetes.io/projected/9067c25f-b237-4cd9-9077-3b6aad08551b-kube-api-access-hgzxt\") pod \"neutron-db-create-xr55r\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") " pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.167540 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scddv\" (UniqueName: \"kubernetes.io/projected/ea5dd3b9-5a4f-4687-9776-683799bf06dd-kube-api-access-scddv\") pod \"neutron-5bed-account-create-q2wl9\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") " pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.168863 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5dd3b9-5a4f-4687-9776-683799bf06dd-operator-scripts\") pod \"neutron-5bed-account-create-q2wl9\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") " pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.218281 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.271466 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5dd3b9-5a4f-4687-9776-683799bf06dd-operator-scripts\") pod \"neutron-5bed-account-create-q2wl9\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") " pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.271595 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scddv\" (UniqueName: \"kubernetes.io/projected/ea5dd3b9-5a4f-4687-9776-683799bf06dd-kube-api-access-scddv\") pod \"neutron-5bed-account-create-q2wl9\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") " pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.272255 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5dd3b9-5a4f-4687-9776-683799bf06dd-operator-scripts\") pod \"neutron-5bed-account-create-q2wl9\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") " pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.292411 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scddv\" (UniqueName: \"kubernetes.io/projected/ea5dd3b9-5a4f-4687-9776-683799bf06dd-kube-api-access-scddv\") pod \"neutron-5bed-account-create-q2wl9\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") " pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.319460 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.671922 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xr55r"]
Nov 23 08:43:30 crc kubenswrapper[5028]: W1123 08:43:30.677596 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9067c25f_b237_4cd9_9077_3b6aad08551b.slice/crio-04ca8b32ef9006235e16a1003f40f2b29f200d17d978432d7818cf8294ad357c WatchSource:0}: Error finding container 04ca8b32ef9006235e16a1003f40f2b29f200d17d978432d7818cf8294ad357c: Status 404 returned error can't find the container with id 04ca8b32ef9006235e16a1003f40f2b29f200d17d978432d7818cf8294ad357c
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.692249 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xr55r" event={"ID":"9067c25f-b237-4cd9-9077-3b6aad08551b","Type":"ContainerStarted","Data":"04ca8b32ef9006235e16a1003f40f2b29f200d17d978432d7818cf8294ad357c"}
Nov 23 08:43:30 crc kubenswrapper[5028]: I1123 08:43:30.791100 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bed-account-create-q2wl9"]
Nov 23 08:43:30 crc kubenswrapper[5028]: W1123 08:43:30.799707 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea5dd3b9_5a4f_4687_9776_683799bf06dd.slice/crio-f81529848bac1572e302cf1a375dd1adb6abe90e771668242683ed046b54df1d WatchSource:0}: Error finding container f81529848bac1572e302cf1a375dd1adb6abe90e771668242683ed046b54df1d: Status 404 returned error can't find the container with id f81529848bac1572e302cf1a375dd1adb6abe90e771668242683ed046b54df1d
Nov 23 08:43:31 crc kubenswrapper[5028]: I1123 08:43:31.706420 5028 generic.go:334] "Generic (PLEG): container finished" podID="9067c25f-b237-4cd9-9077-3b6aad08551b" containerID="8b38f5e69c321e10d3c96deb399e79ef7e0e2a4ee8ba4d16159c26a7ae10c244" exitCode=0
Nov 23 08:43:31 crc kubenswrapper[5028]: I1123 08:43:31.706478 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xr55r" event={"ID":"9067c25f-b237-4cd9-9077-3b6aad08551b","Type":"ContainerDied","Data":"8b38f5e69c321e10d3c96deb399e79ef7e0e2a4ee8ba4d16159c26a7ae10c244"}
Nov 23 08:43:31 crc kubenswrapper[5028]: I1123 08:43:31.713201 5028 generic.go:334] "Generic (PLEG): container finished" podID="ea5dd3b9-5a4f-4687-9776-683799bf06dd" containerID="b2070a992faec4113ffac7548b341f74f9e46f23408a06b0fe5177c4e7175e1d" exitCode=0
Nov 23 08:43:31 crc kubenswrapper[5028]: I1123 08:43:31.713279 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bed-account-create-q2wl9" event={"ID":"ea5dd3b9-5a4f-4687-9776-683799bf06dd","Type":"ContainerDied","Data":"b2070a992faec4113ffac7548b341f74f9e46f23408a06b0fe5177c4e7175e1d"}
Nov 23 08:43:31 crc kubenswrapper[5028]: I1123 08:43:31.713323 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bed-account-create-q2wl9" event={"ID":"ea5dd3b9-5a4f-4687-9776-683799bf06dd","Type":"ContainerStarted","Data":"f81529848bac1572e302cf1a375dd1adb6abe90e771668242683ed046b54df1d"}
Nov 23 08:43:32 crc kubenswrapper[5028]: I1123 08:43:32.053721 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"
Nov 23 08:43:32 crc kubenswrapper[5028]: E1123 08:43:32.054215 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.154017 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.162777 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.334234 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgzxt\" (UniqueName: \"kubernetes.io/projected/9067c25f-b237-4cd9-9077-3b6aad08551b-kube-api-access-hgzxt\") pod \"9067c25f-b237-4cd9-9077-3b6aad08551b\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") "
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.334429 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9067c25f-b237-4cd9-9077-3b6aad08551b-operator-scripts\") pod \"9067c25f-b237-4cd9-9077-3b6aad08551b\" (UID: \"9067c25f-b237-4cd9-9077-3b6aad08551b\") "
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.334453 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5dd3b9-5a4f-4687-9776-683799bf06dd-operator-scripts\") pod \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") "
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.334513 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scddv\" (UniqueName: \"kubernetes.io/projected/ea5dd3b9-5a4f-4687-9776-683799bf06dd-kube-api-access-scddv\") pod \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\" (UID: \"ea5dd3b9-5a4f-4687-9776-683799bf06dd\") "
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.335494 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9067c25f-b237-4cd9-9077-3b6aad08551b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9067c25f-b237-4cd9-9077-3b6aad08551b" (UID: "9067c25f-b237-4cd9-9077-3b6aad08551b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.335609 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea5dd3b9-5a4f-4687-9776-683799bf06dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea5dd3b9-5a4f-4687-9776-683799bf06dd" (UID: "ea5dd3b9-5a4f-4687-9776-683799bf06dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.343180 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5dd3b9-5a4f-4687-9776-683799bf06dd-kube-api-access-scddv" (OuterVolumeSpecName: "kube-api-access-scddv") pod "ea5dd3b9-5a4f-4687-9776-683799bf06dd" (UID: "ea5dd3b9-5a4f-4687-9776-683799bf06dd"). InnerVolumeSpecName "kube-api-access-scddv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.343331 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9067c25f-b237-4cd9-9077-3b6aad08551b-kube-api-access-hgzxt" (OuterVolumeSpecName: "kube-api-access-hgzxt") pod "9067c25f-b237-4cd9-9077-3b6aad08551b" (UID: "9067c25f-b237-4cd9-9077-3b6aad08551b"). InnerVolumeSpecName "kube-api-access-hgzxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.446047 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgzxt\" (UniqueName: \"kubernetes.io/projected/9067c25f-b237-4cd9-9077-3b6aad08551b-kube-api-access-hgzxt\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.446096 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9067c25f-b237-4cd9-9077-3b6aad08551b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.446108 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea5dd3b9-5a4f-4687-9776-683799bf06dd-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.446117 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scddv\" (UniqueName: \"kubernetes.io/projected/ea5dd3b9-5a4f-4687-9776-683799bf06dd-kube-api-access-scddv\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.734438 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xr55r" event={"ID":"9067c25f-b237-4cd9-9077-3b6aad08551b","Type":"ContainerDied","Data":"04ca8b32ef9006235e16a1003f40f2b29f200d17d978432d7818cf8294ad357c"}
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.734485 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ca8b32ef9006235e16a1003f40f2b29f200d17d978432d7818cf8294ad357c"
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.734891 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xr55r"
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.738202 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bed-account-create-q2wl9" event={"ID":"ea5dd3b9-5a4f-4687-9776-683799bf06dd","Type":"ContainerDied","Data":"f81529848bac1572e302cf1a375dd1adb6abe90e771668242683ed046b54df1d"}
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.738247 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f81529848bac1572e302cf1a375dd1adb6abe90e771668242683ed046b54df1d"
Nov 23 08:43:33 crc kubenswrapper[5028]: I1123 08:43:33.738330 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bed-account-create-q2wl9"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.440615 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pfnxd"]
Nov 23 08:43:35 crc kubenswrapper[5028]: E1123 08:43:35.441556 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5dd3b9-5a4f-4687-9776-683799bf06dd" containerName="mariadb-account-create"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.441571 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5dd3b9-5a4f-4687-9776-683799bf06dd" containerName="mariadb-account-create"
Nov 23 08:43:35 crc kubenswrapper[5028]: E1123 08:43:35.441584 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9067c25f-b237-4cd9-9077-3b6aad08551b" containerName="mariadb-database-create"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.441590 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9067c25f-b237-4cd9-9077-3b6aad08551b" containerName="mariadb-database-create"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.441858 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea5dd3b9-5a4f-4687-9776-683799bf06dd" containerName="mariadb-account-create"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.441881 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9067c25f-b237-4cd9-9077-3b6aad08551b" containerName="mariadb-database-create"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.442788 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.448584 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.448937 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5zq5m"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.449250 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 23 08:43:35 crc kubenswrapper[5028]: I1123 08:43:35.455848 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pfnxd"]
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.149787 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-www9z\" (UniqueName: \"kubernetes.io/projected/3b30b1ec-c387-488e-9175-fbc068279c73-kube-api-access-www9z\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.150028 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-combined-ca-bundle\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.150127 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-config\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.251896 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-www9z\" (UniqueName: \"kubernetes.io/projected/3b30b1ec-c387-488e-9175-fbc068279c73-kube-api-access-www9z\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.252038 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-combined-ca-bundle\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.252093 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-config\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.260709 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-config\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.269795 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-www9z\" (UniqueName: \"kubernetes.io/projected/3b30b1ec-c387-488e-9175-fbc068279c73-kube-api-access-www9z\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.272047 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-combined-ca-bundle\") pod \"neutron-db-sync-pfnxd\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") " pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.369770 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:36 crc kubenswrapper[5028]: I1123 08:43:36.894066 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pfnxd"]
Nov 23 08:43:36 crc kubenswrapper[5028]: W1123 08:43:36.897237 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b30b1ec_c387_488e_9175_fbc068279c73.slice/crio-ff6030a2245dd4482ea2bf715212566963f7397130b26d0da061963622ed95e0 WatchSource:0}: Error finding container ff6030a2245dd4482ea2bf715212566963f7397130b26d0da061963622ed95e0: Status 404 returned error can't find the container with id ff6030a2245dd4482ea2bf715212566963f7397130b26d0da061963622ed95e0
Nov 23 08:43:37 crc kubenswrapper[5028]: I1123 08:43:37.089986 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfnxd" event={"ID":"3b30b1ec-c387-488e-9175-fbc068279c73","Type":"ContainerStarted","Data":"ff6030a2245dd4482ea2bf715212566963f7397130b26d0da061963622ed95e0"}
Nov 23 08:43:38 crc kubenswrapper[5028]: I1123 08:43:38.103512 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfnxd" event={"ID":"3b30b1ec-c387-488e-9175-fbc068279c73","Type":"ContainerStarted","Data":"9ca5076792e782196270b773502415dc21d34469a04aed6057a0a9880a88be98"}
Nov 23 08:43:38 crc kubenswrapper[5028]: I1123 08:43:38.128437 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pfnxd" podStartSLOduration=3.128411745 podStartE2EDuration="3.128411745s" podCreationTimestamp="2025-11-23 08:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:38.119897167 +0000 UTC m=+6801.817301946" watchObservedRunningTime="2025-11-23 08:43:38.128411745 +0000 UTC m=+6801.825816524"
Nov 23 08:43:42 crc kubenswrapper[5028]: I1123 08:43:42.156803 5028 generic.go:334] "Generic (PLEG): container finished" podID="3b30b1ec-c387-488e-9175-fbc068279c73" containerID="9ca5076792e782196270b773502415dc21d34469a04aed6057a0a9880a88be98" exitCode=0
Nov 23 08:43:42 crc kubenswrapper[5028]: I1123 08:43:42.156863 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfnxd" event={"ID":"3b30b1ec-c387-488e-9175-fbc068279c73","Type":"ContainerDied","Data":"9ca5076792e782196270b773502415dc21d34469a04aed6057a0a9880a88be98"}
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.517002 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.615452 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-www9z\" (UniqueName: \"kubernetes.io/projected/3b30b1ec-c387-488e-9175-fbc068279c73-kube-api-access-www9z\") pod \"3b30b1ec-c387-488e-9175-fbc068279c73\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") "
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.615564 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-combined-ca-bundle\") pod \"3b30b1ec-c387-488e-9175-fbc068279c73\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") "
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.615687 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-config\") pod \"3b30b1ec-c387-488e-9175-fbc068279c73\" (UID: \"3b30b1ec-c387-488e-9175-fbc068279c73\") "
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.623186 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b30b1ec-c387-488e-9175-fbc068279c73-kube-api-access-www9z" (OuterVolumeSpecName: "kube-api-access-www9z") pod "3b30b1ec-c387-488e-9175-fbc068279c73" (UID: "3b30b1ec-c387-488e-9175-fbc068279c73"). InnerVolumeSpecName "kube-api-access-www9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.645408 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-config" (OuterVolumeSpecName: "config") pod "3b30b1ec-c387-488e-9175-fbc068279c73" (UID: "3b30b1ec-c387-488e-9175-fbc068279c73"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.651343 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b30b1ec-c387-488e-9175-fbc068279c73" (UID: "3b30b1ec-c387-488e-9175-fbc068279c73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.718089 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.718127 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3b30b1ec-c387-488e-9175-fbc068279c73-config\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:43 crc kubenswrapper[5028]: I1123 08:43:43.718141 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-www9z\" (UniqueName: \"kubernetes.io/projected/3b30b1ec-c387-488e-9175-fbc068279c73-kube-api-access-www9z\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.189712 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pfnxd" event={"ID":"3b30b1ec-c387-488e-9175-fbc068279c73","Type":"ContainerDied","Data":"ff6030a2245dd4482ea2bf715212566963f7397130b26d0da061963622ed95e0"}
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.190318 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff6030a2245dd4482ea2bf715212566963f7397130b26d0da061963622ed95e0"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.189840 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pfnxd"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.437679 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-854948bf47-lwvrw"]
Nov 23 08:43:44 crc kubenswrapper[5028]: E1123 08:43:44.438101 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b30b1ec-c387-488e-9175-fbc068279c73" containerName="neutron-db-sync"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.438113 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b30b1ec-c387-488e-9175-fbc068279c73" containerName="neutron-db-sync"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.438352 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b30b1ec-c387-488e-9175-fbc068279c73" containerName="neutron-db-sync"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.439886 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.462813 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854948bf47-lwvrw"]
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.539708 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvkf\" (UniqueName: \"kubernetes.io/projected/454d3ec2-4984-445c-bf59-e6a459c69a2b-kube-api-access-vcvkf\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.539789 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-config\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.539831 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-nb\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.539966 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-dns-svc\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.540065 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-sb\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.575277 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c89b65897-cxm9k"]
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.577554 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.582166 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.582422 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5zq5m"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.582665 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.602397 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c89b65897-cxm9k"]
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.644742 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcvkf\" (UniqueName: \"kubernetes.io/projected/454d3ec2-4984-445c-bf59-e6a459c69a2b-kube-api-access-vcvkf\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.644795 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-config\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.644826 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-nb\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.644916 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-dns-svc\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.644994 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-sb\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.646325 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-sb\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.646376 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-nb\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.647057 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-dns-svc\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.647169 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-config\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.678595 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcvkf\" (UniqueName: \"kubernetes.io/projected/454d3ec2-4984-445c-bf59-e6a459c69a2b-kube-api-access-vcvkf\") pod \"dnsmasq-dns-854948bf47-lwvrw\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.748518 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-config\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.748634 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-httpd-config\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.748672 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-combined-ca-bundle\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.748774 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2vh\" (UniqueName: \"kubernetes.io/projected/a25194a0-0614-4490-bb9a-c184114469f2-kube-api-access-2p2vh\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.804455 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.850724 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2vh\" (UniqueName: \"kubernetes.io/projected/a25194a0-0614-4490-bb9a-c184114469f2-kube-api-access-2p2vh\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.851366 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-config\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.851408 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-combined-ca-bundle\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.851427 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-httpd-config\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.855621 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-httpd-config\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.859806 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-combined-ca-bundle\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.860603 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a25194a0-0614-4490-bb9a-c184114469f2-config\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.886805 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2vh\" (UniqueName: \"kubernetes.io/projected/a25194a0-0614-4490-bb9a-c184114469f2-kube-api-access-2p2vh\") pod \"neutron-c89b65897-cxm9k\" (UID: \"a25194a0-0614-4490-bb9a-c184114469f2\") " pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:44 crc kubenswrapper[5028]: I1123 08:43:44.907568 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:45 crc kubenswrapper[5028]: I1123 08:43:45.524600 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854948bf47-lwvrw"]
Nov 23 08:43:45 crc kubenswrapper[5028]: I1123 08:43:45.732609 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c89b65897-cxm9k"]
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.219152 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c89b65897-cxm9k" event={"ID":"a25194a0-0614-4490-bb9a-c184114469f2","Type":"ContainerStarted","Data":"8354aa5afb37fda766c63255d3410d1a253d4419986f096a98ab67c076ce62ea"}
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.219635 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c89b65897-cxm9k" event={"ID":"a25194a0-0614-4490-bb9a-c184114469f2","Type":"ContainerStarted","Data":"2e6d47a6c6009b6f7330a997af02168084f7bfeec87ddc6303997ed862f20c3b"}
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.219659 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c89b65897-cxm9k"
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.219672 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c89b65897-cxm9k" event={"ID":"a25194a0-0614-4490-bb9a-c184114469f2","Type":"ContainerStarted","Data":"c4da811298ec6de77b31997bcc0df231dd5f030d3415e711ad6da29909a34c23"}
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.222236 5028 generic.go:334] "Generic (PLEG): container finished" podID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerID="18d51355d61a250b75ba7863d261e5417153d726f6313817af6b199111efd9e0" exitCode=0
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.222276 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" event={"ID":"454d3ec2-4984-445c-bf59-e6a459c69a2b","Type":"ContainerDied","Data":"18d51355d61a250b75ba7863d261e5417153d726f6313817af6b199111efd9e0"}
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.222311 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" event={"ID":"454d3ec2-4984-445c-bf59-e6a459c69a2b","Type":"ContainerStarted","Data":"3602d3bcb6041cebf79ff5c53f47f16f72c70db3910270db4a1f5791e24ff0da"}
Nov 23 08:43:46 crc kubenswrapper[5028]: I1123 08:43:46.243160 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c89b65897-cxm9k" podStartSLOduration=2.243136413 podStartE2EDuration="2.243136413s" podCreationTimestamp="2025-11-23 08:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:46.240342185 +0000 UTC m=+6809.937746974" watchObservedRunningTime="2025-11-23 08:43:46.243136413 +0000 UTC m=+6809.940541192"
Nov 23 08:43:47 crc kubenswrapper[5028]: I1123 08:43:47.066911 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"
Nov 23 08:43:47 crc kubenswrapper[5028]: E1123 08:43:47.067565 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:43:47 crc kubenswrapper[5028]: I1123 08:43:47.233548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" event={"ID":"454d3ec2-4984-445c-bf59-e6a459c69a2b","Type":"ContainerStarted","Data":"53083239f44d5eb90b39b0ca1de9f29ae3a1fdc7b222265a299194b3b6fed5df"}
Nov 23 08:43:47 crc kubenswrapper[5028]: I1123 08:43:47.234189 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:47 crc kubenswrapper[5028]: I1123 08:43:47.259909 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" podStartSLOduration=3.25988128 podStartE2EDuration="3.25988128s" podCreationTimestamp="2025-11-23 08:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:43:47.259357648 +0000 UTC m=+6810.956762427" watchObservedRunningTime="2025-11-23 08:43:47.25988128 +0000 UTC m=+6810.957286059"
Nov 23 08:43:54 crc kubenswrapper[5028]: I1123 08:43:54.807166 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-854948bf47-lwvrw"
Nov 23 08:43:54 crc kubenswrapper[5028]: I1123 08:43:54.891560 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9497d7c6f-wbhg4"]
Nov 23 08:43:54 crc kubenswrapper[5028]: I1123 08:43:54.892000 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" podUID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerName="dnsmasq-dns" containerID="cri-o://1971a3d9447b870a53769834b7d2d083779994565ad3e938d0d1cdb62d593a14" gracePeriod=10
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.335893 5028 generic.go:334] "Generic (PLEG): container finished" podID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerID="1971a3d9447b870a53769834b7d2d083779994565ad3e938d0d1cdb62d593a14" exitCode=0
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.336029 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" event={"ID":"1e02e1ff-61b6-4890-aa5e-65ba5696b271","Type":"ContainerDied","Data":"1971a3d9447b870a53769834b7d2d083779994565ad3e938d0d1cdb62d593a14"}
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.454501 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4"
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.521052 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-459qq\" (UniqueName: \"kubernetes.io/projected/1e02e1ff-61b6-4890-aa5e-65ba5696b271-kube-api-access-459qq\") pod \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") "
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.521216 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-dns-svc\") pod \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") "
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.521373 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-sb\") pod \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") "
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.521401 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-nb\") pod \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") "
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.521544 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-config\") pod \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\" (UID: \"1e02e1ff-61b6-4890-aa5e-65ba5696b271\") "
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.528863 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e02e1ff-61b6-4890-aa5e-65ba5696b271-kube-api-access-459qq" (OuterVolumeSpecName: "kube-api-access-459qq") pod "1e02e1ff-61b6-4890-aa5e-65ba5696b271" (UID: "1e02e1ff-61b6-4890-aa5e-65ba5696b271"). InnerVolumeSpecName "kube-api-access-459qq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.568286 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e02e1ff-61b6-4890-aa5e-65ba5696b271" (UID: "1e02e1ff-61b6-4890-aa5e-65ba5696b271"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.569447 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e02e1ff-61b6-4890-aa5e-65ba5696b271" (UID: "1e02e1ff-61b6-4890-aa5e-65ba5696b271"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.574645 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e02e1ff-61b6-4890-aa5e-65ba5696b271" (UID: "1e02e1ff-61b6-4890-aa5e-65ba5696b271"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.577771 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-config" (OuterVolumeSpecName: "config") pod "1e02e1ff-61b6-4890-aa5e-65ba5696b271" (UID: "1e02e1ff-61b6-4890-aa5e-65ba5696b271"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.624248 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.624824 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.624841 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-config\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.624859 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-459qq\" (UniqueName: \"kubernetes.io/projected/1e02e1ff-61b6-4890-aa5e-65ba5696b271-kube-api-access-459qq\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:55 crc kubenswrapper[5028]: I1123 08:43:55.624877 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e02e1ff-61b6-4890-aa5e-65ba5696b271-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 08:43:56 crc kubenswrapper[5028]: I1123 08:43:56.354003 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" event={"ID":"1e02e1ff-61b6-4890-aa5e-65ba5696b271","Type":"ContainerDied","Data":"b6396df7067f1ffad3589416d1c78e3b379ced9b200566156b2641258ca1ceac"}
Nov 23 08:43:56 crc kubenswrapper[5028]: I1123 08:43:56.354078 5028 scope.go:117] "RemoveContainer" containerID="1971a3d9447b870a53769834b7d2d083779994565ad3e938d0d1cdb62d593a14"
Nov 23 08:43:56 crc kubenswrapper[5028]: I1123 08:43:56.354101 5028 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-9497d7c6f-wbhg4" Nov 23 08:43:56 crc kubenswrapper[5028]: I1123 08:43:56.386768 5028 scope.go:117] "RemoveContainer" containerID="b0382a93dc7b5337bd821db1219045b35681e9b07b42e28380c5003c53f7581f" Nov 23 08:43:56 crc kubenswrapper[5028]: I1123 08:43:56.405570 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9497d7c6f-wbhg4"] Nov 23 08:43:56 crc kubenswrapper[5028]: I1123 08:43:56.415751 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9497d7c6f-wbhg4"] Nov 23 08:43:57 crc kubenswrapper[5028]: I1123 08:43:57.066435 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" path="/var/lib/kubelet/pods/1e02e1ff-61b6-4890-aa5e-65ba5696b271/volumes" Nov 23 08:44:00 crc kubenswrapper[5028]: I1123 08:44:00.053639 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:44:00 crc kubenswrapper[5028]: E1123 08:44:00.054592 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:44:12 crc kubenswrapper[5028]: I1123 08:44:12.054152 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:44:12 crc kubenswrapper[5028]: E1123 08:44:12.055563 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:44:14 crc kubenswrapper[5028]: I1123 08:44:14.929318 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c89b65897-cxm9k" Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.864864 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fk4lw"] Nov 23 08:44:22 crc kubenswrapper[5028]: E1123 08:44:22.865894 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerName="init" Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.865909 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerName="init" Nov 23 08:44:22 crc kubenswrapper[5028]: E1123 08:44:22.865926 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerName="dnsmasq-dns" Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.865934 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerName="dnsmasq-dns" Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.866139 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e02e1ff-61b6-4890-aa5e-65ba5696b271" containerName="dnsmasq-dns" Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.886262 5028 util.go:30] "No sandbox 
Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.923101 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fk4lw"]
Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.956602 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzh9k\" (UniqueName: \"kubernetes.io/projected/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-kube-api-access-lzh9k\") pod \"glance-db-create-fk4lw\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " pod="openstack/glance-db-create-fk4lw"
Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.956724 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-operator-scripts\") pod \"glance-db-create-fk4lw\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " pod="openstack/glance-db-create-fk4lw"
Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.968356 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9363-account-create-7cl9l"]
Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.969794 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9363-account-create-7cl9l"
Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.974113 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 23 08:44:22 crc kubenswrapper[5028]: I1123 08:44:22.982003 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9363-account-create-7cl9l"]
Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.056041 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"
Nov 23 08:44:23 crc kubenswrapper[5028]: E1123 08:44:23.056407 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.058573 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cfcb3-3bbc-4029-81ec-db084de0cd16-operator-scripts\") pod \"glance-9363-account-create-7cl9l\" (UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " pod="openstack/glance-9363-account-create-7cl9l"
Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.058631 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzh9k\" (UniqueName: \"kubernetes.io/projected/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-kube-api-access-lzh9k\") pod \"glance-db-create-fk4lw\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " pod="openstack/glance-db-create-fk4lw"
Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.058731 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zzd4\" (UniqueName: \"kubernetes.io/projected/670cfcb3-3bbc-4029-81ec-db084de0cd16-kube-api-access-4zzd4\") pod \"glance-9363-account-create-7cl9l\"
(UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.058781 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-operator-scripts\") pod \"glance-db-create-fk4lw\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " pod="openstack/glance-db-create-fk4lw" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.059656 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-operator-scripts\") pod \"glance-db-create-fk4lw\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " pod="openstack/glance-db-create-fk4lw" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.085033 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzh9k\" (UniqueName: \"kubernetes.io/projected/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-kube-api-access-lzh9k\") pod \"glance-db-create-fk4lw\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " pod="openstack/glance-db-create-fk4lw" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.160074 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cfcb3-3bbc-4029-81ec-db084de0cd16-operator-scripts\") pod \"glance-9363-account-create-7cl9l\" (UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.160175 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zzd4\" (UniqueName: \"kubernetes.io/projected/670cfcb3-3bbc-4029-81ec-db084de0cd16-kube-api-access-4zzd4\") pod \"glance-9363-account-create-7cl9l\" (UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.162411 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cfcb3-3bbc-4029-81ec-db084de0cd16-operator-scripts\") pod \"glance-9363-account-create-7cl9l\" (UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.203186 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zzd4\" (UniqueName: \"kubernetes.io/projected/670cfcb3-3bbc-4029-81ec-db084de0cd16-kube-api-access-4zzd4\") pod \"glance-9363-account-create-7cl9l\" (UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.222696 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fk4lw" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.296570 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.679993 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fk4lw"] Nov 23 08:44:23 crc kubenswrapper[5028]: I1123 08:44:23.863602 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9363-account-create-7cl9l"] Nov 23 08:44:24 crc kubenswrapper[5028]: I1123 08:44:24.660450 5028 generic.go:334] "Generic (PLEG): container finished" podID="5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c" containerID="2bfd9e29ec3259e869d9ce6f516fb3c4f877cbfea0bd9c2a98dd1d1d5f86b8d3" exitCode=0 Nov 23 08:44:24 crc kubenswrapper[5028]: I1123 08:44:24.660569 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fk4lw" event={"ID":"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c","Type":"ContainerDied","Data":"2bfd9e29ec3259e869d9ce6f516fb3c4f877cbfea0bd9c2a98dd1d1d5f86b8d3"} Nov 23 08:44:24 crc kubenswrapper[5028]: I1123 08:44:24.660629 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fk4lw" event={"ID":"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c","Type":"ContainerStarted","Data":"63d2cae345d66838fecc882108d0dd84355f10d5f150fff4657d5c207b8f08d8"} Nov 23 08:44:24 crc kubenswrapper[5028]: I1123 08:44:24.664833 5028 generic.go:334] "Generic (PLEG): container finished" podID="670cfcb3-3bbc-4029-81ec-db084de0cd16" containerID="8bac3942def42e62826f18c91659afb354cf91f434d4677f5d940593d06af59b" exitCode=0 Nov 23 08:44:24 crc kubenswrapper[5028]: I1123 08:44:24.664902 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9363-account-create-7cl9l" event={"ID":"670cfcb3-3bbc-4029-81ec-db084de0cd16","Type":"ContainerDied","Data":"8bac3942def42e62826f18c91659afb354cf91f434d4677f5d940593d06af59b"} Nov 23 08:44:24 crc kubenswrapper[5028]: I1123 08:44:24.664941 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9363-account-create-7cl9l" event={"ID":"670cfcb3-3bbc-4029-81ec-db084de0cd16","Type":"ContainerStarted","Data":"a239d4df40a8144a74c98772871d0f98341857d2706b013b0b5afd75857b8092"} Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.142302 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.153450 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fk4lw" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.225620 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-operator-scripts\") pod \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.225715 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzh9k\" (UniqueName: \"kubernetes.io/projected/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-kube-api-access-lzh9k\") pod \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\" (UID: \"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c\") " Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.225828 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zzd4\" (UniqueName: \"kubernetes.io/projected/670cfcb3-3bbc-4029-81ec-db084de0cd16-kube-api-access-4zzd4\") pod \"670cfcb3-3bbc-4029-81ec-db084de0cd16\" (UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.225979 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cfcb3-3bbc-4029-81ec-db084de0cd16-operator-scripts\") pod \"670cfcb3-3bbc-4029-81ec-db084de0cd16\" (UID: \"670cfcb3-3bbc-4029-81ec-db084de0cd16\") " Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.227146 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/670cfcb3-3bbc-4029-81ec-db084de0cd16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "670cfcb3-3bbc-4029-81ec-db084de0cd16" (UID: "670cfcb3-3bbc-4029-81ec-db084de0cd16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.227347 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c" (UID: "5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.236283 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-kube-api-access-lzh9k" (OuterVolumeSpecName: "kube-api-access-lzh9k") pod "5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c" (UID: "5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c"). InnerVolumeSpecName "kube-api-access-lzh9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.242143 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/670cfcb3-3bbc-4029-81ec-db084de0cd16-kube-api-access-4zzd4" (OuterVolumeSpecName: "kube-api-access-4zzd4") pod "670cfcb3-3bbc-4029-81ec-db084de0cd16" (UID: "670cfcb3-3bbc-4029-81ec-db084de0cd16"). InnerVolumeSpecName "kube-api-access-4zzd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.328719 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zzd4\" (UniqueName: \"kubernetes.io/projected/670cfcb3-3bbc-4029-81ec-db084de0cd16-kube-api-access-4zzd4\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.328756 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/670cfcb3-3bbc-4029-81ec-db084de0cd16-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.328767 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.328779 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzh9k\" (UniqueName: \"kubernetes.io/projected/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c-kube-api-access-lzh9k\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.696239 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9363-account-create-7cl9l" event={"ID":"670cfcb3-3bbc-4029-81ec-db084de0cd16","Type":"ContainerDied","Data":"a239d4df40a8144a74c98772871d0f98341857d2706b013b0b5afd75857b8092"} Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.696315 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a239d4df40a8144a74c98772871d0f98341857d2706b013b0b5afd75857b8092" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.696332 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9363-account-create-7cl9l" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.699529 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fk4lw" event={"ID":"5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c","Type":"ContainerDied","Data":"63d2cae345d66838fecc882108d0dd84355f10d5f150fff4657d5c207b8f08d8"} Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.699602 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fk4lw" Nov 23 08:44:26 crc kubenswrapper[5028]: I1123 08:44:26.699612 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d2cae345d66838fecc882108d0dd84355f10d5f150fff4657d5c207b8f08d8" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.188870 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-79hbz"] Nov 23 08:44:28 crc kubenswrapper[5028]: E1123 08:44:28.189884 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="670cfcb3-3bbc-4029-81ec-db084de0cd16" containerName="mariadb-account-create" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.189906 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="670cfcb3-3bbc-4029-81ec-db084de0cd16" containerName="mariadb-account-create" Nov 23 08:44:28 crc kubenswrapper[5028]: E1123 08:44:28.189919 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c" containerName="mariadb-database-create" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.189927 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c" containerName="mariadb-database-create" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.190179 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="670cfcb3-3bbc-4029-81ec-db084de0cd16" containerName="mariadb-account-create" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.190228 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c" containerName="mariadb-database-create" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.191496 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.196510 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.196554 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xqrb7" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.224808 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-79hbz"] Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.275166 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-combined-ca-bundle\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.275279 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-config-data\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.275304 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-db-sync-config-data\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.275350 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7lr\" (UniqueName: \"kubernetes.io/projected/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-kube-api-access-kx7lr\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.376983 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-config-data\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.377051 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-db-sync-config-data\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.377120 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7lr\" (UniqueName: \"kubernetes.io/projected/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-kube-api-access-kx7lr\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.377195 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-combined-ca-bundle\") pod 
\"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.384835 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-db-sync-config-data\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.385071 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-config-data\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.386507 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-combined-ca-bundle\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.400180 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7lr\" (UniqueName: \"kubernetes.io/projected/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-kube-api-access-kx7lr\") pod \"glance-db-sync-79hbz\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") " pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:28 crc kubenswrapper[5028]: I1123 08:44:28.554244 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:29 crc kubenswrapper[5028]: I1123 08:44:29.189837 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-79hbz"] Nov 23 08:44:29 crc kubenswrapper[5028]: I1123 08:44:29.735395 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-79hbz" event={"ID":"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1","Type":"ContainerStarted","Data":"c16378ea28e52e86dc4780d1639bcf6f7efe63cb81fe94e7909746c7c3ab4a96"} Nov 23 08:44:35 crc kubenswrapper[5028]: I1123 08:44:35.053713 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:44:35 crc kubenswrapper[5028]: E1123 08:44:35.055866 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:44:45 crc kubenswrapper[5028]: I1123 08:44:45.923094 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-79hbz" event={"ID":"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1","Type":"ContainerStarted","Data":"04027337ddb311f3a01680b248e2c01061df27aaa9aba27f34acf1a012e2d182"} Nov 23 08:44:45 crc kubenswrapper[5028]: I1123 08:44:45.952680 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-79hbz" podStartSLOduration=2.449640313 podStartE2EDuration="17.952652831s" podCreationTimestamp="2025-11-23 08:44:28 +0000 UTC" firstStartedPulling="2025-11-23 08:44:29.196450785 
Nov 23 08:44:48 crc kubenswrapper[5028]: I1123 08:44:48.970415 5028 generic.go:334] "Generic (PLEG): container finished" podID="5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" containerID="04027337ddb311f3a01680b248e2c01061df27aaa9aba27f34acf1a012e2d182" exitCode=0
Nov 23 08:44:48 crc kubenswrapper[5028]: I1123 08:44:48.970563 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-79hbz" event={"ID":"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1","Type":"ContainerDied","Data":"04027337ddb311f3a01680b248e2c01061df27aaa9aba27f34acf1a012e2d182"}
Nov 23 08:44:49 crc kubenswrapper[5028]: I1123 08:44:49.053478 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"
Nov 23 08:44:49 crc kubenswrapper[5028]: E1123 08:44:49.054282 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.421307 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-79hbz"
Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.549851 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7lr\" (UniqueName: \"kubernetes.io/projected/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-kube-api-access-kx7lr\") pod \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") "
Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.549939 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-config-data\") pod \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") "
Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.550026 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-combined-ca-bundle\") pod \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") "
Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.550244 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-db-sync-config-data\") pod \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\" (UID: \"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1\") "
Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.560283 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-kube-api-access-kx7lr" (OuterVolumeSpecName: "kube-api-access-kx7lr") pod "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" (UID: "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1"). InnerVolumeSpecName "kube-api-access-kx7lr".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.561190 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" (UID: "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.582619 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" (UID: "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.607420 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-config-data" (OuterVolumeSpecName: "config-data") pod "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" (UID: "5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.653439 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7lr\" (UniqueName: \"kubernetes.io/projected/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-kube-api-access-kx7lr\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.653483 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.653498 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.653525 5028 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.997874 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-79hbz" event={"ID":"5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1","Type":"ContainerDied","Data":"c16378ea28e52e86dc4780d1639bcf6f7efe63cb81fe94e7909746c7c3ab4a96"} Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.998348 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16378ea28e52e86dc4780d1639bcf6f7efe63cb81fe94e7909746c7c3ab4a96" Nov 23 08:44:50 crc kubenswrapper[5028]: I1123 08:44:50.998049 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-79hbz" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.358301 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:51 crc kubenswrapper[5028]: E1123 08:44:51.358801 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" containerName="glance-db-sync" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.358818 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" containerName="glance-db-sync" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.359083 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" containerName="glance-db-sync" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.362737 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.372488 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.375465 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.380472 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xqrb7" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.380479 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.405508 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.471450 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-ceph\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.471536 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-logs\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.471568 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.471600 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79x6p\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-kube-api-access-79x6p\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.471681 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-scripts\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.471710 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.471775 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-config-data\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.507294 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84649649-x9ktp"] Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.510727 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.529911 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84649649-x9ktp"] Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.583322 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-config-data\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.583402 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-ceph\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.583430 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-logs\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.583455 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.583479 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79x6p\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-kube-api-access-79x6p\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " 
pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.583539 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-scripts\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.583562 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.584068 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.584220 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-logs\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.594805 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-ceph\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.601996 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-scripts\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.603245 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-config-data\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.609712 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.611507 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79x6p\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-kube-api-access-79x6p\") pod \"glance-default-external-api-0\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.621527 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.623468 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.627905 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.631162 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.685334 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-dns-svc\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.685814 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lj8x\" (UniqueName: \"kubernetes.io/projected/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-kube-api-access-8lj8x\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.685877 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.685903 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.686022 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-config\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.705641 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790345 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790427 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbkm7\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-kube-api-access-lbkm7\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790462 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-config\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790531 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790569 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-dns-svc\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790596 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790636 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lj8x\" (UniqueName: \"kubernetes.io/projected/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-kube-api-access-8lj8x\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790665 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790700 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790740 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790762 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.790798 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.792040 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-config\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.792050 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.792872 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-dns-svc\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.793275 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.816242 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lj8x\" (UniqueName: \"kubernetes.io/projected/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-kube-api-access-8lj8x\") pod \"dnsmasq-dns-7d84649649-x9ktp\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.846069 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.893482 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.893582 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbkm7\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-kube-api-access-lbkm7\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.893652 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.893693 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.893734 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.893770 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.893808 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.894327 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.894680 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-logs\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.902595 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.908272 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-ceph\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.908245 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.908445 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:51 crc kubenswrapper[5028]: I1123 08:44:51.918138 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbkm7\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-kube-api-access-lbkm7\") pod \"glance-default-internal-api-0\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:52 crc kubenswrapper[5028]: I1123 08:44:52.009484 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:44:52 crc kubenswrapper[5028]: I1123 08:44:52.431851 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:52 crc kubenswrapper[5028]: I1123 08:44:52.448893 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84649649-x9ktp"] Nov 23 08:44:52 crc kubenswrapper[5028]: I1123 08:44:52.709925 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:52 crc kubenswrapper[5028]: I1123 08:44:52.814194 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:53 crc kubenswrapper[5028]: I1123 08:44:53.023316 5028 generic.go:334] "Generic (PLEG): container finished" podID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerID="4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d" exitCode=0 Nov 23 08:44:53 crc kubenswrapper[5028]: I1123 08:44:53.023445 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" event={"ID":"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe","Type":"ContainerDied","Data":"4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d"} Nov 23 08:44:53 crc kubenswrapper[5028]: I1123 08:44:53.023503 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" event={"ID":"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe","Type":"ContainerStarted","Data":"90cd28d31c60ccafbd10f95e98431a4a3cef886ed62a109f336f905d5100af02"} Nov 23 08:44:53 crc kubenswrapper[5028]: I1123 08:44:53.037167 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b","Type":"ContainerStarted","Data":"29a0a06820a92697840af81f475cf0448e2167898f4d918dd75f34476924fcf1"} Nov 23 08:44:53 crc kubenswrapper[5028]: I1123 08:44:53.076084 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dfe5a35-1c44-4d14-858a-0885d0bdfa62","Type":"ContainerStarted","Data":"c5271c7b73888cc3f90b46bd5969a48b815ba8c959a476f2c6b51751275f096d"} Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.078964 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" event={"ID":"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe","Type":"ContainerStarted","Data":"762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a"} Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.079438 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.093984 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b","Type":"ContainerStarted","Data":"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3"} Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.094047 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b","Type":"ContainerStarted","Data":"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1"} Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.094203 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-log" containerID="cri-o://103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1" gracePeriod=30 Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.094336 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-httpd" containerID="cri-o://a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3" gracePeriod=30 Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.102324 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dfe5a35-1c44-4d14-858a-0885d0bdfa62","Type":"ContainerStarted","Data":"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42"} Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.102378 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dfe5a35-1c44-4d14-858a-0885d0bdfa62","Type":"ContainerStarted","Data":"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a"} Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.133197 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" podStartSLOduration=3.133177003 podStartE2EDuration="3.133177003s" podCreationTimestamp="2025-11-23 08:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:54.103687106 +0000 UTC m=+6877.801091885" watchObservedRunningTime="2025-11-23 08:44:54.133177003 +0000 UTC m=+6877.830581782" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.143906 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.143877676 podStartE2EDuration="3.143877676s" podCreationTimestamp="2025-11-23 08:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:54.130853355 +0000 UTC m=+6877.828258144" watchObservedRunningTime="2025-11-23 08:44:54.143877676 +0000 UTC m=+6877.841282445" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.173205 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.173174608 podStartE2EDuration="3.173174608s" podCreationTimestamp="2025-11-23 08:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:54.164607577 +0000 UTC m=+6877.862012356" watchObservedRunningTime="2025-11-23 08:44:54.173174608 +0000 UTC m=+6877.870579377" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.724578 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.815916 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.879749 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-config-data\") pod \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.879850 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-scripts\") pod \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.879993 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-logs\") pod \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.880034 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-httpd-run\") pod \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.880104 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79x6p\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-kube-api-access-79x6p\") pod \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.880151 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-ceph\") pod \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.880182 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-combined-ca-bundle\") pod \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\" (UID: \"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b\") " Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.880700 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-logs" (OuterVolumeSpecName: "logs") pod "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" (UID: "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.880936 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" (UID: "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.887400 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-scripts" (OuterVolumeSpecName: "scripts") pod "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" (UID: "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.887681 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-ceph" (OuterVolumeSpecName: "ceph") pod "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" (UID: "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.887857 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-kube-api-access-79x6p" (OuterVolumeSpecName: "kube-api-access-79x6p") pod "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" (UID: "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b"). InnerVolumeSpecName "kube-api-access-79x6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.914652 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" (UID: "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.940135 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-config-data" (OuterVolumeSpecName: "config-data") pod "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" (UID: "5b3754e2-36e1-410b-b307-aa0c4cbe7f5b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.983024 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.983106 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.983122 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79x6p\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-kube-api-access-79x6p\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.983133 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.983142 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.983154 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:54 crc kubenswrapper[5028]: I1123 08:44:54.983163 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.117838 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.117877 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b","Type":"ContainerDied","Data":"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3"} Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.117972 5028 scope.go:117] "RemoveContainer" containerID="a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.117739 5028 generic.go:334] "Generic (PLEG): container finished" podID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerID="a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3" exitCode=143 Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.118961 5028 generic.go:334] "Generic (PLEG): container finished" podID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerID="103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1" exitCode=143 Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.119126 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b","Type":"ContainerDied","Data":"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1"} Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.119156 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5b3754e2-36e1-410b-b307-aa0c4cbe7f5b","Type":"ContainerDied","Data":"29a0a06820a92697840af81f475cf0448e2167898f4d918dd75f34476924fcf1"} Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.145770 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.148461 5028 scope.go:117] "RemoveContainer" containerID="103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.162523 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.171356 5028 scope.go:117] "RemoveContainer" containerID="a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3" Nov 23 08:44:55 crc kubenswrapper[5028]: E1123 08:44:55.172091 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3\": container with ID starting with a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3 not found: ID does not exist" containerID="a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.172188 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3"} err="failed to get container status \"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3\": rpc error: code = NotFound desc = could not find container \"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3\": container with ID starting with a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3 not found: ID does not exist" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.172272 5028 scope.go:117] "RemoveContainer" 
containerID="103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1" Nov 23 08:44:55 crc kubenswrapper[5028]: E1123 08:44:55.172895 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1\": container with ID starting with 103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1 not found: ID does not exist" containerID="103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.173006 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1"} err="failed to get container status \"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1\": rpc error: code = NotFound desc = could not find container \"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1\": container with ID starting with 103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1 not found: ID does not exist" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.173088 5028 scope.go:117] "RemoveContainer" containerID="a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.173499 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3"} err="failed to get container status \"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3\": rpc error: code = NotFound desc = could not find container \"a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3\": container with ID starting with a355177a810c0bfa14fa514a16d4b33d3acfecf3a8696721b3042bc90bcb1ca3 not found: ID does not exist" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.173521 5028 scope.go:117] "RemoveContainer" containerID="103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.174179 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1"} err="failed to get container status \"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1\": rpc error: code = NotFound desc = could not find container \"103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1\": container with ID starting with 103dde0d3732ad86340c04bc64fb38f9637313d0c8a24b9e2e729ed0f0628cc1 not found: ID does not exist" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.185179 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:55 crc kubenswrapper[5028]: E1123 08:44:55.185681 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-httpd" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.185699 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-httpd" Nov 23 08:44:55 crc kubenswrapper[5028]: E1123 08:44:55.185711 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-log" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.185717 5028 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-log" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.185871 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-log" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.185899 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" containerName="glance-httpd" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.186963 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.190053 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.212386 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.289462 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.289564 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.289666 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.289717 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k5zw\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-kube-api-access-6k5zw\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.289779 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.289874 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.289924 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-ceph\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.391666 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.392018 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.392166 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.392741 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-logs\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.392856 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k5zw\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-kube-api-access-6k5zw\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.393314 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.393442 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.393555 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-ceph\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.394155 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.396232 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-scripts\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.397607 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.397687 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-ceph\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.403160 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-config-data\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.417823 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k5zw\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-kube-api-access-6k5zw\") pod \"glance-default-external-api-0\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") " pod="openstack/glance-default-external-api-0" Nov 23 08:44:55 crc kubenswrapper[5028]: I1123 08:44:55.513040 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:44:56 crc kubenswrapper[5028]: I1123 08:44:56.099587 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:44:56 crc kubenswrapper[5028]: I1123 08:44:56.138450 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfb848f2-4688-4604-a536-8f8dbacd90b2","Type":"ContainerStarted","Data":"b6a100f5229b003d32229085fe35817e4434f27c50456f16426316d8274f4cfa"} Nov 23 08:44:56 crc kubenswrapper[5028]: I1123 08:44:56.140114 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-log" containerID="cri-o://882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a" gracePeriod=30 Nov 23 08:44:56 crc kubenswrapper[5028]: I1123 08:44:56.140193 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-httpd" containerID="cri-o://00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42" gracePeriod=30 Nov 23 08:44:56 crc kubenswrapper[5028]: I1123 08:44:56.861363 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.025496 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-ceph\") pod \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.025568 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-combined-ca-bundle\") pod \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.025617 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-config-data\") pod \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.025698 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-httpd-run\") pod \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.025767 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-logs\") pod \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.025788 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-scripts\") pod \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " Nov 23 08:44:57 crc 
kubenswrapper[5028]: I1123 08:44:57.026004 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbkm7\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-kube-api-access-lbkm7\") pod \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\" (UID: \"0dfe5a35-1c44-4d14-858a-0885d0bdfa62\") " Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.026226 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-logs" (OuterVolumeSpecName: "logs") pod "0dfe5a35-1c44-4d14-858a-0885d0bdfa62" (UID: "0dfe5a35-1c44-4d14-858a-0885d0bdfa62"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.026263 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0dfe5a35-1c44-4d14-858a-0885d0bdfa62" (UID: "0dfe5a35-1c44-4d14-858a-0885d0bdfa62"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.026676 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.026698 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.031527 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-scripts" (OuterVolumeSpecName: "scripts") pod "0dfe5a35-1c44-4d14-858a-0885d0bdfa62" (UID: "0dfe5a35-1c44-4d14-858a-0885d0bdfa62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.032031 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-kube-api-access-lbkm7" (OuterVolumeSpecName: "kube-api-access-lbkm7") pod "0dfe5a35-1c44-4d14-858a-0885d0bdfa62" (UID: "0dfe5a35-1c44-4d14-858a-0885d0bdfa62"). InnerVolumeSpecName "kube-api-access-lbkm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.032804 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-ceph" (OuterVolumeSpecName: "ceph") pod "0dfe5a35-1c44-4d14-858a-0885d0bdfa62" (UID: "0dfe5a35-1c44-4d14-858a-0885d0bdfa62"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.080089 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dfe5a35-1c44-4d14-858a-0885d0bdfa62" (UID: "0dfe5a35-1c44-4d14-858a-0885d0bdfa62"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.086839 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b3754e2-36e1-410b-b307-aa0c4cbe7f5b" path="/var/lib/kubelet/pods/5b3754e2-36e1-410b-b307-aa0c4cbe7f5b/volumes" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.109114 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-config-data" (OuterVolumeSpecName: "config-data") pod "0dfe5a35-1c44-4d14-858a-0885d0bdfa62" (UID: "0dfe5a35-1c44-4d14-858a-0885d0bdfa62"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.129842 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.129888 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.129904 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.129919 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.129933 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbkm7\" (UniqueName: \"kubernetes.io/projected/0dfe5a35-1c44-4d14-858a-0885d0bdfa62-kube-api-access-lbkm7\") on node \"crc\" DevicePath \"\"" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.164076 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfb848f2-4688-4604-a536-8f8dbacd90b2","Type":"ContainerStarted","Data":"b2f944fbeea1c52002d31c10e03f66511e343ca680368a88e0dc2c59ffe196e8"} Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.174635 5028 generic.go:334] "Generic (PLEG): container finished" podID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerID="00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42" exitCode=0 Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.174693 5028 generic.go:334] "Generic (PLEG): container finished" podID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerID="882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a" exitCode=143 Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.174694 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.174711 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dfe5a35-1c44-4d14-858a-0885d0bdfa62","Type":"ContainerDied","Data":"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42"} Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.174819 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dfe5a35-1c44-4d14-858a-0885d0bdfa62","Type":"ContainerDied","Data":"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a"} Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.174845 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0dfe5a35-1c44-4d14-858a-0885d0bdfa62","Type":"ContainerDied","Data":"c5271c7b73888cc3f90b46bd5969a48b815ba8c959a476f2c6b51751275f096d"} Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.174878 5028 scope.go:117] "RemoveContainer" containerID="00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.229838 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.231685 5028 scope.go:117] "RemoveContainer" containerID="882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.298307 5028 scope.go:117] "RemoveContainer" containerID="00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42" Nov 23 08:44:57 crc kubenswrapper[5028]: E1123 08:44:57.300100 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42\": container with ID starting with 00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42 not found: ID does not exist" containerID="00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.300157 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42"} err="failed to get container status \"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42\": rpc error: code = NotFound desc = could not find container \"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42\": container with ID starting with 00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42 not found: ID does not exist" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.300203 5028 scope.go:117] "RemoveContainer" containerID="882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a" Nov 23 08:44:57 crc kubenswrapper[5028]: E1123 08:44:57.300975 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a\": container with ID starting with 882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a not found: ID does not exist" containerID="882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.301036 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a"} err="failed to get container status \"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a\": rpc error: code = NotFound desc = could not find container \"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a\": container with ID starting with 882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a not found: ID does not exist" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.301064 5028 scope.go:117] "RemoveContainer" containerID="00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.301554 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42"} err="failed to get container status \"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42\": rpc error: code = NotFound desc = could not find container \"00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42\": container with ID starting with 00b0835ceedbcfffb91f580ab4e63bed61dbaf81e2620c474a31f50a8dfc2c42 not found: ID does not exist" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.301601 5028 scope.go:117] "RemoveContainer" containerID="882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.302364 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a"} err="failed to get container status \"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a\": rpc error: code = NotFound desc = could not find container \"882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a\": container with ID starting with 882a1cbd3305ed21aaf93b3f7ed7b5329d044a69e5ddeecfd290454dcab6544a not found: ID does not exist" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.316068 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.327274 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:57 crc kubenswrapper[5028]: E1123 08:44:57.327846 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-httpd" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.327867 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-httpd" Nov 23 08:44:57 crc kubenswrapper[5028]: E1123 08:44:57.327904 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-log" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.327912 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-log" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.328164 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-log" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.328196 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" containerName="glance-httpd" Nov 23 08:44:57 crc kubenswrapper[5028]: 
I1123 08:44:57.329544 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.332644 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.341328 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.440505 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn28m\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-kube-api-access-mn28m\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.440724 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-logs\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.440903 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.441054 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.441265 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.441445 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.441846 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-ceph\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.544177 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn28m\" (UniqueName: 
\"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-kube-api-access-mn28m\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.544677 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-logs\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.544803 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.544935 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.545125 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.545182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-logs\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.545559 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.545576 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.546015 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-ceph\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.553615 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-ceph\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " 
pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.553859 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.554193 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.556147 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.562156 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn28m\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-kube-api-access-mn28m\") pod \"glance-default-internal-api-0\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") " pod="openstack/glance-default-internal-api-0" Nov 23 08:44:57 crc kubenswrapper[5028]: I1123 08:44:57.664930 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 23 08:44:58 crc kubenswrapper[5028]: I1123 08:44:58.194812 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfb848f2-4688-4604-a536-8f8dbacd90b2","Type":"ContainerStarted","Data":"1ec8ecd06481fb58c076cba73ae5d15e7ecec9e86447d4e45f6931f14eae8da9"} Nov 23 08:44:58 crc kubenswrapper[5028]: I1123 08:44:58.218161 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.218138133 podStartE2EDuration="3.218138133s" podCreationTimestamp="2025-11-23 08:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:44:58.21677887 +0000 UTC m=+6881.914183649" watchObservedRunningTime="2025-11-23 08:44:58.218138133 +0000 UTC m=+6881.915542902" Nov 23 08:44:58 crc kubenswrapper[5028]: I1123 08:44:58.259888 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:44:59 crc kubenswrapper[5028]: I1123 08:44:59.065714 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfe5a35-1c44-4d14-858a-0885d0bdfa62" path="/var/lib/kubelet/pods/0dfe5a35-1c44-4d14-858a-0885d0bdfa62/volumes" Nov 23 08:44:59 crc kubenswrapper[5028]: I1123 08:44:59.209718 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01","Type":"ContainerStarted","Data":"3b6d89f1fd322d06c31b2709dce2b71789b730e14c896c0efa9f5631fbe5f85b"} Nov 23 08:44:59 crc kubenswrapper[5028]: I1123 08:44:59.209805 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01","Type":"ContainerStarted","Data":"5de6f13d429db5d9eaa9e1386bf16ba93397cb7cb75a26cb8ad84090ba3cb701"} Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.168646 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm"] Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.171283 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.180309 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.180626 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.203009 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm"] Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.225839 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01","Type":"ContainerStarted","Data":"958068f5100bc7cc0870d9bae5de4e4b428025dcecfad5e4d0c9681ea45d3fab"} Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.260208 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.260184983 podStartE2EDuration="3.260184983s" podCreationTimestamp="2025-11-23 08:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:00.248451854 +0000 UTC m=+6883.945856633" watchObservedRunningTime="2025-11-23 08:45:00.260184983 +0000 UTC m=+6883.957589762" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.318962 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf9kk\" (UniqueName: \"kubernetes.io/projected/51bf389f-30a9-4a74-a931-e8a28b61f7f6-kube-api-access-hf9kk\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.319074 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51bf389f-30a9-4a74-a931-e8a28b61f7f6-secret-volume\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.319282 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51bf389f-30a9-4a74-a931-e8a28b61f7f6-config-volume\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.421086 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf9kk\" (UniqueName: 
\"kubernetes.io/projected/51bf389f-30a9-4a74-a931-e8a28b61f7f6-kube-api-access-hf9kk\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.421181 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51bf389f-30a9-4a74-a931-e8a28b61f7f6-secret-volume\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.421293 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51bf389f-30a9-4a74-a931-e8a28b61f7f6-config-volume\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.422280 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51bf389f-30a9-4a74-a931-e8a28b61f7f6-config-volume\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.430063 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51bf389f-30a9-4a74-a931-e8a28b61f7f6-secret-volume\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.440257 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf9kk\" (UniqueName: \"kubernetes.io/projected/51bf389f-30a9-4a74-a931-e8a28b61f7f6-kube-api-access-hf9kk\") pod \"collect-profiles-29398125-fsnxm\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.499495 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:00 crc kubenswrapper[5028]: I1123 08:45:00.989071 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm"] Nov 23 08:45:01 crc kubenswrapper[5028]: I1123 08:45:01.238143 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" event={"ID":"51bf389f-30a9-4a74-a931-e8a28b61f7f6","Type":"ContainerStarted","Data":"20d04bd511a431b08cc0bdd8197803aa7cfeecede596d6c7f279ebdc69e86030"} Nov 23 08:45:01 crc kubenswrapper[5028]: I1123 08:45:01.238795 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" event={"ID":"51bf389f-30a9-4a74-a931-e8a28b61f7f6","Type":"ContainerStarted","Data":"781973c7def4ba8eced141474c676f7caa807b186fa4345fd20ce00cb3534afb"} Nov 23 08:45:01 crc kubenswrapper[5028]: I1123 08:45:01.266196 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" podStartSLOduration=1.2661715820000001 podStartE2EDuration="1.266171582s" podCreationTimestamp="2025-11-23 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:01.261561569 +0000 UTC m=+6884.958966358" watchObservedRunningTime="2025-11-23 08:45:01.266171582 +0000 UTC m=+6884.963576361" Nov 23 08:45:01 crc kubenswrapper[5028]: I1123 08:45:01.849722 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:45:01 crc kubenswrapper[5028]: I1123 08:45:01.921303 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854948bf47-lwvrw"] Nov 23 08:45:01 crc kubenswrapper[5028]: I1123 08:45:01.922187 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" podUID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerName="dnsmasq-dns" containerID="cri-o://53083239f44d5eb90b39b0ca1de9f29ae3a1fdc7b222265a299194b3b6fed5df" gracePeriod=10 Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.053258 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:45:02 crc kubenswrapper[5028]: E1123 08:45:02.053521 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.263942 5028 generic.go:334] "Generic (PLEG): container finished" podID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerID="53083239f44d5eb90b39b0ca1de9f29ae3a1fdc7b222265a299194b3b6fed5df" exitCode=0 Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.264043 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" event={"ID":"454d3ec2-4984-445c-bf59-e6a459c69a2b","Type":"ContainerDied","Data":"53083239f44d5eb90b39b0ca1de9f29ae3a1fdc7b222265a299194b3b6fed5df"} Nov 23 08:45:02 
crc kubenswrapper[5028]: I1123 08:45:02.265866 5028 generic.go:334] "Generic (PLEG): container finished" podID="51bf389f-30a9-4a74-a931-e8a28b61f7f6" containerID="20d04bd511a431b08cc0bdd8197803aa7cfeecede596d6c7f279ebdc69e86030" exitCode=0 Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.265893 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" event={"ID":"51bf389f-30a9-4a74-a931-e8a28b61f7f6","Type":"ContainerDied","Data":"20d04bd511a431b08cc0bdd8197803aa7cfeecede596d6c7f279ebdc69e86030"} Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.407666 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.581465 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-config\") pod \"454d3ec2-4984-445c-bf59-e6a459c69a2b\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.581941 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-nb\") pod \"454d3ec2-4984-445c-bf59-e6a459c69a2b\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.582081 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcvkf\" (UniqueName: \"kubernetes.io/projected/454d3ec2-4984-445c-bf59-e6a459c69a2b-kube-api-access-vcvkf\") pod \"454d3ec2-4984-445c-bf59-e6a459c69a2b\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.582210 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-dns-svc\") pod \"454d3ec2-4984-445c-bf59-e6a459c69a2b\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.582237 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-sb\") pod \"454d3ec2-4984-445c-bf59-e6a459c69a2b\" (UID: \"454d3ec2-4984-445c-bf59-e6a459c69a2b\") " Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.590833 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454d3ec2-4984-445c-bf59-e6a459c69a2b-kube-api-access-vcvkf" (OuterVolumeSpecName: "kube-api-access-vcvkf") pod "454d3ec2-4984-445c-bf59-e6a459c69a2b" (UID: "454d3ec2-4984-445c-bf59-e6a459c69a2b"). InnerVolumeSpecName "kube-api-access-vcvkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.648163 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "454d3ec2-4984-445c-bf59-e6a459c69a2b" (UID: "454d3ec2-4984-445c-bf59-e6a459c69a2b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.655821 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "454d3ec2-4984-445c-bf59-e6a459c69a2b" (UID: "454d3ec2-4984-445c-bf59-e6a459c69a2b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.657384 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-config" (OuterVolumeSpecName: "config") pod "454d3ec2-4984-445c-bf59-e6a459c69a2b" (UID: "454d3ec2-4984-445c-bf59-e6a459c69a2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.676340 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "454d3ec2-4984-445c-bf59-e6a459c69a2b" (UID: "454d3ec2-4984-445c-bf59-e6a459c69a2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.685766 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.686162 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.686265 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcvkf\" (UniqueName: \"kubernetes.io/projected/454d3ec2-4984-445c-bf59-e6a459c69a2b-kube-api-access-vcvkf\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.686348 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:02 crc kubenswrapper[5028]: I1123 08:45:02.686436 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454d3ec2-4984-445c-bf59-e6a459c69a2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.281301 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" event={"ID":"454d3ec2-4984-445c-bf59-e6a459c69a2b","Type":"ContainerDied","Data":"3602d3bcb6041cebf79ff5c53f47f16f72c70db3910270db4a1f5791e24ff0da"} Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.281341 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854948bf47-lwvrw" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.281391 5028 scope.go:117] "RemoveContainer" containerID="53083239f44d5eb90b39b0ca1de9f29ae3a1fdc7b222265a299194b3b6fed5df" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.313433 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854948bf47-lwvrw"] Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.317813 5028 scope.go:117] "RemoveContainer" containerID="18d51355d61a250b75ba7863d261e5417153d726f6313817af6b199111efd9e0" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.322607 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-854948bf47-lwvrw"] Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.582932 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.708569 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf9kk\" (UniqueName: \"kubernetes.io/projected/51bf389f-30a9-4a74-a931-e8a28b61f7f6-kube-api-access-hf9kk\") pod \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.709076 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51bf389f-30a9-4a74-a931-e8a28b61f7f6-secret-volume\") pod \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.709225 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51bf389f-30a9-4a74-a931-e8a28b61f7f6-config-volume\") pod \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\" (UID: \"51bf389f-30a9-4a74-a931-e8a28b61f7f6\") " Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.710472 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51bf389f-30a9-4a74-a931-e8a28b61f7f6-config-volume" (OuterVolumeSpecName: "config-volume") pod "51bf389f-30a9-4a74-a931-e8a28b61f7f6" (UID: "51bf389f-30a9-4a74-a931-e8a28b61f7f6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.713584 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bf389f-30a9-4a74-a931-e8a28b61f7f6-kube-api-access-hf9kk" (OuterVolumeSpecName: "kube-api-access-hf9kk") pod "51bf389f-30a9-4a74-a931-e8a28b61f7f6" (UID: "51bf389f-30a9-4a74-a931-e8a28b61f7f6"). InnerVolumeSpecName "kube-api-access-hf9kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.713620 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51bf389f-30a9-4a74-a931-e8a28b61f7f6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51bf389f-30a9-4a74-a931-e8a28b61f7f6" (UID: "51bf389f-30a9-4a74-a931-e8a28b61f7f6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.811536 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51bf389f-30a9-4a74-a931-e8a28b61f7f6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.811585 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51bf389f-30a9-4a74-a931-e8a28b61f7f6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:03 crc kubenswrapper[5028]: I1123 08:45:03.811597 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf9kk\" (UniqueName: \"kubernetes.io/projected/51bf389f-30a9-4a74-a931-e8a28b61f7f6-kube-api-access-hf9kk\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:04 crc kubenswrapper[5028]: I1123 08:45:04.331430 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" event={"ID":"51bf389f-30a9-4a74-a931-e8a28b61f7f6","Type":"ContainerDied","Data":"781973c7def4ba8eced141474c676f7caa807b186fa4345fd20ce00cb3534afb"} Nov 23 08:45:04 crc kubenswrapper[5028]: I1123 08:45:04.332197 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781973c7def4ba8eced141474c676f7caa807b186fa4345fd20ce00cb3534afb" Nov 23 08:45:04 crc kubenswrapper[5028]: I1123 08:45:04.331750 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm" Nov 23 08:45:04 crc kubenswrapper[5028]: I1123 08:45:04.374922 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf"] Nov 23 08:45:04 crc kubenswrapper[5028]: I1123 08:45:04.384942 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398080-xfncf"] Nov 23 08:45:05 crc kubenswrapper[5028]: I1123 08:45:05.076274 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454d3ec2-4984-445c-bf59-e6a459c69a2b" path="/var/lib/kubelet/pods/454d3ec2-4984-445c-bf59-e6a459c69a2b/volumes" Nov 23 08:45:05 crc kubenswrapper[5028]: I1123 08:45:05.077051 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe0a787-6771-41dd-a8a4-32a53fe4c5ef" path="/var/lib/kubelet/pods/7fe0a787-6771-41dd-a8a4-32a53fe4c5ef/volumes" Nov 23 08:45:05 crc kubenswrapper[5028]: I1123 08:45:05.513347 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 08:45:05 crc kubenswrapper[5028]: I1123 08:45:05.513830 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 08:45:05 crc kubenswrapper[5028]: I1123 08:45:05.547374 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 08:45:05 crc kubenswrapper[5028]: I1123 08:45:05.595171 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 08:45:06 crc kubenswrapper[5028]: I1123 08:45:06.354226 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 08:45:06 crc kubenswrapper[5028]: I1123 08:45:06.354279 5028 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 08:45:07 crc kubenswrapper[5028]: I1123 08:45:07.666711 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:07 crc kubenswrapper[5028]: I1123 08:45:07.668751 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:07 crc kubenswrapper[5028]: I1123 08:45:07.700098 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:07 crc kubenswrapper[5028]: I1123 08:45:07.726188 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:08 crc kubenswrapper[5028]: I1123 08:45:08.377916 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:08 crc kubenswrapper[5028]: I1123 08:45:08.378043 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:08 crc kubenswrapper[5028]: I1123 08:45:08.433040 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 08:45:08 crc kubenswrapper[5028]: I1123 08:45:08.433169 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 08:45:08 crc kubenswrapper[5028]: I1123 08:45:08.803220 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 08:45:10 crc kubenswrapper[5028]: I1123 08:45:10.369887 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:10 crc kubenswrapper[5028]: I1123 08:45:10.396988 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 08:45:10 crc kubenswrapper[5028]: I1123 08:45:10.617228 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 08:45:13 crc kubenswrapper[5028]: I1123 08:45:13.053799 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:45:13 crc kubenswrapper[5028]: E1123 08:45:13.054599 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.788866 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xv4d8"] Nov 23 08:45:16 crc kubenswrapper[5028]: E1123 08:45:16.789932 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerName="dnsmasq-dns" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.789965 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerName="dnsmasq-dns" Nov 23 08:45:16 crc kubenswrapper[5028]: E1123 08:45:16.789981 5028 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="51bf389f-30a9-4a74-a931-e8a28b61f7f6" containerName="collect-profiles" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.789988 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bf389f-30a9-4a74-a931-e8a28b61f7f6" containerName="collect-profiles" Nov 23 08:45:16 crc kubenswrapper[5028]: E1123 08:45:16.790001 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerName="init" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.790008 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerName="init" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.790173 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="454d3ec2-4984-445c-bf59-e6a459c69a2b" containerName="dnsmasq-dns" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.790191 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bf389f-30a9-4a74-a931-e8a28b61f7f6" containerName="collect-profiles" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.790873 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.807491 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xv4d8"] Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.857926 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzqbf\" (UniqueName: \"kubernetes.io/projected/b3994798-d137-4748-8b00-fc218bfe4481-kube-api-access-wzqbf\") pod \"placement-db-create-xv4d8\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.858051 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3994798-d137-4748-8b00-fc218bfe4481-operator-scripts\") pod \"placement-db-create-xv4d8\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.909786 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bcda-account-create-xw4g8"] Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.923083 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.928070 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.947215 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bcda-account-create-xw4g8"] Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.960809 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drwz\" (UniqueName: \"kubernetes.io/projected/fe52fd0d-7df3-41ba-9339-0789c17b27c2-kube-api-access-8drwz\") pod \"placement-bcda-account-create-xw4g8\" (UID: \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.960876 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzqbf\" (UniqueName: \"kubernetes.io/projected/b3994798-d137-4748-8b00-fc218bfe4481-kube-api-access-wzqbf\") pod \"placement-db-create-xv4d8\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.960967 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52fd0d-7df3-41ba-9339-0789c17b27c2-operator-scripts\") pod \"placement-bcda-account-create-xw4g8\" (UID: \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.961164 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3994798-d137-4748-8b00-fc218bfe4481-operator-scripts\") pod \"placement-db-create-xv4d8\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.962142 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3994798-d137-4748-8b00-fc218bfe4481-operator-scripts\") pod \"placement-db-create-xv4d8\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:16 crc kubenswrapper[5028]: I1123 08:45:16.983315 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzqbf\" (UniqueName: \"kubernetes.io/projected/b3994798-d137-4748-8b00-fc218bfe4481-kube-api-access-wzqbf\") pod \"placement-db-create-xv4d8\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.063772 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drwz\" (UniqueName: \"kubernetes.io/projected/fe52fd0d-7df3-41ba-9339-0789c17b27c2-kube-api-access-8drwz\") pod \"placement-bcda-account-create-xw4g8\" (UID: \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.063938 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52fd0d-7df3-41ba-9339-0789c17b27c2-operator-scripts\") pod \"placement-bcda-account-create-xw4g8\" (UID: 
\"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.065082 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52fd0d-7df3-41ba-9339-0789c17b27c2-operator-scripts\") pod \"placement-bcda-account-create-xw4g8\" (UID: \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.086607 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drwz\" (UniqueName: \"kubernetes.io/projected/fe52fd0d-7df3-41ba-9339-0789c17b27c2-kube-api-access-8drwz\") pod \"placement-bcda-account-create-xw4g8\" (UID: \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.138618 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.247336 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.681784 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xv4d8"] Nov 23 08:45:17 crc kubenswrapper[5028]: W1123 08:45:17.686164 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3994798_d137_4748_8b00_fc218bfe4481.slice/crio-72b9d24fb94d051581b11fe7cfdf27ee1c6c0e803d78b95a3bbe5f001fddf95d WatchSource:0}: Error finding container 72b9d24fb94d051581b11fe7cfdf27ee1c6c0e803d78b95a3bbe5f001fddf95d: Status 404 returned error can't find the container with id 72b9d24fb94d051581b11fe7cfdf27ee1c6c0e803d78b95a3bbe5f001fddf95d Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.757899 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bcda-account-create-xw4g8"] Nov 23 08:45:17 crc kubenswrapper[5028]: W1123 08:45:17.760882 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe52fd0d_7df3_41ba_9339_0789c17b27c2.slice/crio-f7fc8c39fe8c4f66edfb5722ddd449e2389e59f6d585504576cd71f2dc910647 WatchSource:0}: Error finding container f7fc8c39fe8c4f66edfb5722ddd449e2389e59f6d585504576cd71f2dc910647: Status 404 returned error can't find the container with id f7fc8c39fe8c4f66edfb5722ddd449e2389e59f6d585504576cd71f2dc910647 Nov 23 08:45:17 crc kubenswrapper[5028]: I1123 08:45:17.767165 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 23 08:45:18 crc kubenswrapper[5028]: I1123 08:45:18.509078 5028 generic.go:334] "Generic (PLEG): container finished" podID="fe52fd0d-7df3-41ba-9339-0789c17b27c2" containerID="018aefdeccc74c1cea3f0edda3f7c5d4ddfc9e6d0a02b788ff44ae88a5f91a09" exitCode=0 Nov 23 08:45:18 crc kubenswrapper[5028]: I1123 08:45:18.509187 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bcda-account-create-xw4g8" event={"ID":"fe52fd0d-7df3-41ba-9339-0789c17b27c2","Type":"ContainerDied","Data":"018aefdeccc74c1cea3f0edda3f7c5d4ddfc9e6d0a02b788ff44ae88a5f91a09"} Nov 23 08:45:18 crc kubenswrapper[5028]: I1123 08:45:18.509219 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-bcda-account-create-xw4g8" event={"ID":"fe52fd0d-7df3-41ba-9339-0789c17b27c2","Type":"ContainerStarted","Data":"f7fc8c39fe8c4f66edfb5722ddd449e2389e59f6d585504576cd71f2dc910647"} Nov 23 08:45:18 crc kubenswrapper[5028]: I1123 08:45:18.511527 5028 generic.go:334] "Generic (PLEG): container finished" podID="b3994798-d137-4748-8b00-fc218bfe4481" containerID="37d15c5206eac3e3e22ec4e6a2865467410d5060d9cd187357317e950df38882" exitCode=0 Nov 23 08:45:18 crc kubenswrapper[5028]: I1123 08:45:18.511578 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xv4d8" event={"ID":"b3994798-d137-4748-8b00-fc218bfe4481","Type":"ContainerDied","Data":"37d15c5206eac3e3e22ec4e6a2865467410d5060d9cd187357317e950df38882"} Nov 23 08:45:18 crc kubenswrapper[5028]: I1123 08:45:18.511607 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xv4d8" event={"ID":"b3994798-d137-4748-8b00-fc218bfe4481","Type":"ContainerStarted","Data":"72b9d24fb94d051581b11fe7cfdf27ee1c6c0e803d78b95a3bbe5f001fddf95d"} Nov 23 08:45:19 crc kubenswrapper[5028]: I1123 08:45:19.949438 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:19 crc kubenswrapper[5028]: I1123 08:45:19.959107 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.128888 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52fd0d-7df3-41ba-9339-0789c17b27c2-operator-scripts\") pod \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\" (UID: \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.128996 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8drwz\" (UniqueName: \"kubernetes.io/projected/fe52fd0d-7df3-41ba-9339-0789c17b27c2-kube-api-access-8drwz\") pod \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\" (UID: \"fe52fd0d-7df3-41ba-9339-0789c17b27c2\") " Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.129048 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzqbf\" (UniqueName: \"kubernetes.io/projected/b3994798-d137-4748-8b00-fc218bfe4481-kube-api-access-wzqbf\") pod \"b3994798-d137-4748-8b00-fc218bfe4481\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.129096 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3994798-d137-4748-8b00-fc218bfe4481-operator-scripts\") pod \"b3994798-d137-4748-8b00-fc218bfe4481\" (UID: \"b3994798-d137-4748-8b00-fc218bfe4481\") " Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.130120 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe52fd0d-7df3-41ba-9339-0789c17b27c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe52fd0d-7df3-41ba-9339-0789c17b27c2" (UID: "fe52fd0d-7df3-41ba-9339-0789c17b27c2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.130137 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3994798-d137-4748-8b00-fc218bfe4481-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3994798-d137-4748-8b00-fc218bfe4481" (UID: "b3994798-d137-4748-8b00-fc218bfe4481"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.130565 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe52fd0d-7df3-41ba-9339-0789c17b27c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.130599 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3994798-d137-4748-8b00-fc218bfe4481-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.138021 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe52fd0d-7df3-41ba-9339-0789c17b27c2-kube-api-access-8drwz" (OuterVolumeSpecName: "kube-api-access-8drwz") pod "fe52fd0d-7df3-41ba-9339-0789c17b27c2" (UID: "fe52fd0d-7df3-41ba-9339-0789c17b27c2"). InnerVolumeSpecName "kube-api-access-8drwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.138319 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3994798-d137-4748-8b00-fc218bfe4481-kube-api-access-wzqbf" (OuterVolumeSpecName: "kube-api-access-wzqbf") pod "b3994798-d137-4748-8b00-fc218bfe4481" (UID: "b3994798-d137-4748-8b00-fc218bfe4481"). InnerVolumeSpecName "kube-api-access-wzqbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.232831 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8drwz\" (UniqueName: \"kubernetes.io/projected/fe52fd0d-7df3-41ba-9339-0789c17b27c2-kube-api-access-8drwz\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.232882 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzqbf\" (UniqueName: \"kubernetes.io/projected/b3994798-d137-4748-8b00-fc218bfe4481-kube-api-access-wzqbf\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.542810 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bcda-account-create-xw4g8" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.542782 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bcda-account-create-xw4g8" event={"ID":"fe52fd0d-7df3-41ba-9339-0789c17b27c2","Type":"ContainerDied","Data":"f7fc8c39fe8c4f66edfb5722ddd449e2389e59f6d585504576cd71f2dc910647"} Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.543485 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fc8c39fe8c4f66edfb5722ddd449e2389e59f6d585504576cd71f2dc910647" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.546457 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xv4d8" event={"ID":"b3994798-d137-4748-8b00-fc218bfe4481","Type":"ContainerDied","Data":"72b9d24fb94d051581b11fe7cfdf27ee1c6c0e803d78b95a3bbe5f001fddf95d"} Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.546505 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b9d24fb94d051581b11fe7cfdf27ee1c6c0e803d78b95a3bbe5f001fddf95d" Nov 23 08:45:20 crc kubenswrapper[5028]: I1123 08:45:20.546567 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xv4d8" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.370176 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-df64d79bf-rn84r"] Nov 23 08:45:22 crc kubenswrapper[5028]: E1123 08:45:22.370621 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3994798-d137-4748-8b00-fc218bfe4481" containerName="mariadb-database-create" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.370634 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3994798-d137-4748-8b00-fc218bfe4481" containerName="mariadb-database-create" Nov 23 08:45:22 crc kubenswrapper[5028]: E1123 08:45:22.370666 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe52fd0d-7df3-41ba-9339-0789c17b27c2" containerName="mariadb-account-create" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.370673 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe52fd0d-7df3-41ba-9339-0789c17b27c2" containerName="mariadb-account-create" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.370857 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe52fd0d-7df3-41ba-9339-0789c17b27c2" containerName="mariadb-account-create" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.370875 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3994798-d137-4748-8b00-fc218bfe4481" containerName="mariadb-database-create" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.375280 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.399138 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df64d79bf-rn84r"] Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.410792 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-thhmt"] Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.412130 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.414588 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.415380 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.415748 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rwpvt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.470245 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-thhmt"] Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.477859 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-config\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.477930 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-425hl\" (UniqueName: \"kubernetes.io/projected/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-kube-api-access-425hl\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.477979 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-nb\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.478032 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-sb\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.478113 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-dns-svc\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.579490 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-sb\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.579566 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-combined-ca-bundle\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 
crc kubenswrapper[5028]: I1123 08:45:22.579611 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-scripts\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.579632 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv69x\" (UniqueName: \"kubernetes.io/projected/1d276c8a-cc60-421c-95d7-4182305d9e52-kube-api-access-xv69x\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.579664 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-dns-svc\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.579866 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-config-data\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.580021 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-config\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.580082 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d276c8a-cc60-421c-95d7-4182305d9e52-logs\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.580133 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-425hl\" (UniqueName: \"kubernetes.io/projected/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-kube-api-access-425hl\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.580158 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-nb\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.580621 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-dns-svc\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.580676 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-sb\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.581197 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-nb\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.581285 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-config\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.605507 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-425hl\" (UniqueName: \"kubernetes.io/projected/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-kube-api-access-425hl\") pod \"dnsmasq-dns-df64d79bf-rn84r\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.681452 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-combined-ca-bundle\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.681517 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-scripts\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.681544 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv69x\" (UniqueName: \"kubernetes.io/projected/1d276c8a-cc60-421c-95d7-4182305d9e52-kube-api-access-xv69x\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.681597 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-config-data\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.681651 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d276c8a-cc60-421c-95d7-4182305d9e52-logs\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.682237 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d276c8a-cc60-421c-95d7-4182305d9e52-logs\") pod \"placement-db-sync-thhmt\" (UID: 
\"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.685781 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-combined-ca-bundle\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.690583 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-scripts\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.691114 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-config-data\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.698224 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.707630 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv69x\" (UniqueName: \"kubernetes.io/projected/1d276c8a-cc60-421c-95d7-4182305d9e52-kube-api-access-xv69x\") pod \"placement-db-sync-thhmt\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:22 crc kubenswrapper[5028]: I1123 08:45:22.732207 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:23 crc kubenswrapper[5028]: I1123 08:45:23.046123 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-df64d79bf-rn84r"] Nov 23 08:45:23 crc kubenswrapper[5028]: I1123 08:45:23.295437 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-thhmt"] Nov 23 08:45:23 crc kubenswrapper[5028]: W1123 08:45:23.297879 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d276c8a_cc60_421c_95d7_4182305d9e52.slice/crio-4a86940fe88006cb7b91d77298e6529fc508132ddd3d23d7c8c5206042329ae4 WatchSource:0}: Error finding container 4a86940fe88006cb7b91d77298e6529fc508132ddd3d23d7c8c5206042329ae4: Status 404 returned error can't find the container with id 4a86940fe88006cb7b91d77298e6529fc508132ddd3d23d7c8c5206042329ae4 Nov 23 08:45:23 crc kubenswrapper[5028]: I1123 08:45:23.576539 5028 generic.go:334] "Generic (PLEG): container finished" podID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerID="8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c" exitCode=0 Nov 23 08:45:23 crc kubenswrapper[5028]: I1123 08:45:23.576634 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" event={"ID":"92edd648-c9d1-49df-8d5e-ab22f0e96a9b","Type":"ContainerDied","Data":"8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c"} Nov 23 08:45:23 crc kubenswrapper[5028]: I1123 08:45:23.576673 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" event={"ID":"92edd648-c9d1-49df-8d5e-ab22f0e96a9b","Type":"ContainerStarted","Data":"c3e41d7daadf2d033c56c399f422e7ab2d9a446028d4741ef4eec229a8ed28e7"} Nov 23 08:45:23 crc kubenswrapper[5028]: I1123 08:45:23.579018 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thhmt" event={"ID":"1d276c8a-cc60-421c-95d7-4182305d9e52","Type":"ContainerStarted","Data":"4a86940fe88006cb7b91d77298e6529fc508132ddd3d23d7c8c5206042329ae4"} Nov 23 08:45:24 crc kubenswrapper[5028]: I1123 08:45:24.593109 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" event={"ID":"92edd648-c9d1-49df-8d5e-ab22f0e96a9b","Type":"ContainerStarted","Data":"ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d"} Nov 23 08:45:24 crc kubenswrapper[5028]: I1123 08:45:24.593464 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:24 crc kubenswrapper[5028]: I1123 08:45:24.622507 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" podStartSLOduration=2.62247991 podStartE2EDuration="2.62247991s" podCreationTimestamp="2025-11-23 08:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:24.617670762 +0000 UTC m=+6908.315075541" watchObservedRunningTime="2025-11-23 08:45:24.62247991 +0000 UTC m=+6908.319884689" Nov 23 08:45:25 crc kubenswrapper[5028]: I1123 08:45:25.054517 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:45:25 crc kubenswrapper[5028]: E1123 08:45:25.055487 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:45:25 crc kubenswrapper[5028]: I1123 08:45:25.775988 5028 scope.go:117] "RemoveContainer" containerID="8b9895524d07ca5d47c52dcb30b556d2d00ab23367b4d2e3a3ce76c7eaf8cff0" Nov 23 08:45:26 crc kubenswrapper[5028]: I1123 08:45:26.752143 5028 scope.go:117] "RemoveContainer" containerID="2c60705715c668f7a5bb9de5fcafff5ce67e7caef599faca9140a006a9ae1080" Nov 23 08:45:27 crc kubenswrapper[5028]: I1123 08:45:27.632480 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thhmt" event={"ID":"1d276c8a-cc60-421c-95d7-4182305d9e52","Type":"ContainerStarted","Data":"23f45590c1c197ab642a89db4f0190cb22f329e2b3e8fded1b4be393f6d89e62"} Nov 23 08:45:27 crc kubenswrapper[5028]: I1123 08:45:27.661522 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-thhmt" podStartSLOduration=2.166412137 podStartE2EDuration="5.661497417s" podCreationTimestamp="2025-11-23 08:45:22 +0000 UTC" firstStartedPulling="2025-11-23 08:45:23.300170879 +0000 UTC m=+6906.997575658" lastFinishedPulling="2025-11-23 08:45:26.795256119 +0000 UTC m=+6910.492660938" observedRunningTime="2025-11-23 08:45:27.653059799 +0000 UTC m=+6911.350464578" watchObservedRunningTime="2025-11-23 08:45:27.661497417 +0000 UTC m=+6911.358902216" Nov 23 08:45:28 crc kubenswrapper[5028]: I1123 08:45:28.648214 5028 generic.go:334] "Generic (PLEG): container finished" podID="1d276c8a-cc60-421c-95d7-4182305d9e52" containerID="23f45590c1c197ab642a89db4f0190cb22f329e2b3e8fded1b4be393f6d89e62" exitCode=0 Nov 23 08:45:28 crc kubenswrapper[5028]: I1123 08:45:28.648292 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thhmt" event={"ID":"1d276c8a-cc60-421c-95d7-4182305d9e52","Type":"ContainerDied","Data":"23f45590c1c197ab642a89db4f0190cb22f329e2b3e8fded1b4be393f6d89e62"} Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.086550 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.154842 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-scripts\") pod \"1d276c8a-cc60-421c-95d7-4182305d9e52\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.155538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-config-data\") pod \"1d276c8a-cc60-421c-95d7-4182305d9e52\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.155598 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-combined-ca-bundle\") pod \"1d276c8a-cc60-421c-95d7-4182305d9e52\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.155713 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv69x\" (UniqueName: \"kubernetes.io/projected/1d276c8a-cc60-421c-95d7-4182305d9e52-kube-api-access-xv69x\") pod \"1d276c8a-cc60-421c-95d7-4182305d9e52\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.155924 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d276c8a-cc60-421c-95d7-4182305d9e52-logs\") pod \"1d276c8a-cc60-421c-95d7-4182305d9e52\" (UID: \"1d276c8a-cc60-421c-95d7-4182305d9e52\") " Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.156744 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d276c8a-cc60-421c-95d7-4182305d9e52-logs" (OuterVolumeSpecName: "logs") pod "1d276c8a-cc60-421c-95d7-4182305d9e52" (UID: "1d276c8a-cc60-421c-95d7-4182305d9e52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.158828 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d276c8a-cc60-421c-95d7-4182305d9e52-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.163787 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d276c8a-cc60-421c-95d7-4182305d9e52-kube-api-access-xv69x" (OuterVolumeSpecName: "kube-api-access-xv69x") pod "1d276c8a-cc60-421c-95d7-4182305d9e52" (UID: "1d276c8a-cc60-421c-95d7-4182305d9e52"). InnerVolumeSpecName "kube-api-access-xv69x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.165684 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-scripts" (OuterVolumeSpecName: "scripts") pod "1d276c8a-cc60-421c-95d7-4182305d9e52" (UID: "1d276c8a-cc60-421c-95d7-4182305d9e52"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.190205 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d276c8a-cc60-421c-95d7-4182305d9e52" (UID: "1d276c8a-cc60-421c-95d7-4182305d9e52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.190668 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-config-data" (OuterVolumeSpecName: "config-data") pod "1d276c8a-cc60-421c-95d7-4182305d9e52" (UID: "1d276c8a-cc60-421c-95d7-4182305d9e52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.260412 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.260457 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.260470 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d276c8a-cc60-421c-95d7-4182305d9e52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.260485 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv69x\" (UniqueName: \"kubernetes.io/projected/1d276c8a-cc60-421c-95d7-4182305d9e52-kube-api-access-xv69x\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.674480 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-thhmt" event={"ID":"1d276c8a-cc60-421c-95d7-4182305d9e52","Type":"ContainerDied","Data":"4a86940fe88006cb7b91d77298e6529fc508132ddd3d23d7c8c5206042329ae4"} Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.674578 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a86940fe88006cb7b91d77298e6529fc508132ddd3d23d7c8c5206042329ae4" Nov 23 08:45:30 crc kubenswrapper[5028]: I1123 08:45:30.674709 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-thhmt" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.258212 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d958d448-ghwp2"] Nov 23 08:45:31 crc kubenswrapper[5028]: E1123 08:45:31.258805 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d276c8a-cc60-421c-95d7-4182305d9e52" containerName="placement-db-sync" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.258827 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d276c8a-cc60-421c-95d7-4182305d9e52" containerName="placement-db-sync" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.259092 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d276c8a-cc60-421c-95d7-4182305d9e52" containerName="placement-db-sync" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.260475 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.272346 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.276893 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rwpvt" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.279613 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.281762 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d958d448-ghwp2"] Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.282960 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-combined-ca-bundle\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.283000 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-scripts\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.283074 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f333e06c-4c38-44d0-a316-6f4882382b73-logs\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.283147 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfkf\" (UniqueName: \"kubernetes.io/projected/f333e06c-4c38-44d0-a316-6f4882382b73-kube-api-access-nqfkf\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.283180 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-config-data\") pod 
\"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.385800 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfkf\" (UniqueName: \"kubernetes.io/projected/f333e06c-4c38-44d0-a316-6f4882382b73-kube-api-access-nqfkf\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.385878 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-config-data\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.385934 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-combined-ca-bundle\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.385975 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-scripts\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.386044 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f333e06c-4c38-44d0-a316-6f4882382b73-logs\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.390391 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f333e06c-4c38-44d0-a316-6f4882382b73-logs\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.409263 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfkf\" (UniqueName: \"kubernetes.io/projected/f333e06c-4c38-44d0-a316-6f4882382b73-kube-api-access-nqfkf\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.409727 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-config-data\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.412716 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-combined-ca-bundle\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 
08:45:31.413642 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f333e06c-4c38-44d0-a316-6f4882382b73-scripts\") pod \"placement-6d958d448-ghwp2\" (UID: \"f333e06c-4c38-44d0-a316-6f4882382b73\") " pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:31 crc kubenswrapper[5028]: I1123 08:45:31.627506 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.127485 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d958d448-ghwp2"] Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.699258 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.710079 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d958d448-ghwp2" event={"ID":"f333e06c-4c38-44d0-a316-6f4882382b73","Type":"ContainerStarted","Data":"ebf036e6de51c598afb1d10ebe35574cb67606d9f88539fc8c314bb3552ac204"} Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.710136 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d958d448-ghwp2" event={"ID":"f333e06c-4c38-44d0-a316-6f4882382b73","Type":"ContainerStarted","Data":"13775b32f656797301dc7c47ca74190c60165822ba501cff33dbd3b31ebdbb9f"} Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.710151 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d958d448-ghwp2" event={"ID":"f333e06c-4c38-44d0-a316-6f4882382b73","Type":"ContainerStarted","Data":"19853861d6108582876eefe2dcfadaa6dfff422d10dca214428294c2c1410ee5"} Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.711122 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.711168 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.782666 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84649649-x9ktp"] Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.783285 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" podUID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerName="dnsmasq-dns" containerID="cri-o://762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a" gracePeriod=10 Nov 23 08:45:32 crc kubenswrapper[5028]: I1123 08:45:32.788547 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d958d448-ghwp2" podStartSLOduration=1.7885130839999999 podStartE2EDuration="1.788513084s" podCreationTimestamp="2025-11-23 08:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:45:32.783222274 +0000 UTC m=+6916.480627063" watchObservedRunningTime="2025-11-23 08:45:32.788513084 +0000 UTC m=+6916.485917893" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.466377 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.566856 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-config\") pod \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.566930 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-dns-svc\") pod \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.566977 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-sb\") pod \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.567019 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-nb\") pod \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.567157 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lj8x\" (UniqueName: \"kubernetes.io/projected/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-kube-api-access-8lj8x\") pod \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\" (UID: \"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe\") " Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.577284 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-kube-api-access-8lj8x" (OuterVolumeSpecName: "kube-api-access-8lj8x") pod "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" (UID: "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe"). InnerVolumeSpecName "kube-api-access-8lj8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.615225 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" (UID: "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.626751 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" (UID: "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.639408 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" (UID: "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.650840 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-config" (OuterVolumeSpecName: "config") pod "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" (UID: "f6f972d3-42b5-4ee8-8fe0-26e918ad10fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.669565 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.669606 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.669617 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.669629 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lj8x\" (UniqueName: \"kubernetes.io/projected/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-kube-api-access-8lj8x\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.669639 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.724561 5028 generic.go:334] "Generic (PLEG): container finished" podID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerID="762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a" exitCode=0 Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.725736 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.725854 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" event={"ID":"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe","Type":"ContainerDied","Data":"762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a"} Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.725901 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84649649-x9ktp" event={"ID":"f6f972d3-42b5-4ee8-8fe0-26e918ad10fe","Type":"ContainerDied","Data":"90cd28d31c60ccafbd10f95e98431a4a3cef886ed62a109f336f905d5100af02"} Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.725926 5028 scope.go:117] "RemoveContainer" containerID="762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.753607 5028 scope.go:117] "RemoveContainer" containerID="4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.769439 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84649649-x9ktp"] Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.780059 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84649649-x9ktp"] Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.792906 5028 scope.go:117] "RemoveContainer" containerID="762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a" Nov 23 08:45:33 crc kubenswrapper[5028]: E1123 08:45:33.793389 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a\": container with ID starting with 762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a not found: ID does not exist" containerID="762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.793425 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a"} err="failed to get container status \"762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a\": rpc error: code = NotFound desc = could not find container \"762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a\": container with ID starting with 762944324c7544a7a72041a60acaa6f862c6696d6330799ae8bbaf1395a34b0a not found: ID does not exist" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.793455 5028 scope.go:117] "RemoveContainer" containerID="4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d" Nov 23 08:45:33 crc kubenswrapper[5028]: E1123 08:45:33.793925 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d\": container with ID starting with 4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d not found: ID does not exist" containerID="4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d" Nov 23 08:45:33 crc kubenswrapper[5028]: I1123 08:45:33.794104 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d"} err="failed to get container status 
\"4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d\": rpc error: code = NotFound desc = could not find container \"4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d\": container with ID starting with 4f461fe11e1d94661ef4577b439cbc40e79d75b5dd3b4ecff24eac9ec6c6076d not found: ID does not exist" Nov 23 08:45:35 crc kubenswrapper[5028]: I1123 08:45:35.064315 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" path="/var/lib/kubelet/pods/f6f972d3-42b5-4ee8-8fe0-26e918ad10fe/volumes" Nov 23 08:45:36 crc kubenswrapper[5028]: I1123 08:45:36.054114 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:45:36 crc kubenswrapper[5028]: E1123 08:45:36.055023 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:45:48 crc kubenswrapper[5028]: I1123 08:45:48.054384 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:45:48 crc kubenswrapper[5028]: E1123 08:45:48.056599 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:46:01 crc kubenswrapper[5028]: I1123 08:46:01.054457 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:46:01 crc kubenswrapper[5028]: E1123 08:46:01.055793 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:46:02 crc kubenswrapper[5028]: I1123 08:46:02.682292 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:46:02 crc kubenswrapper[5028]: I1123 08:46:02.684576 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d958d448-ghwp2" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.632889 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rmxxv"] Nov 23 08:46:10 crc kubenswrapper[5028]: E1123 08:46:10.635816 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerName="dnsmasq-dns" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.635896 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerName="dnsmasq-dns" Nov 23 08:46:10 crc kubenswrapper[5028]: E1123 
08:46:10.636013 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerName="init" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.636609 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerName="init" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.637028 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f972d3-42b5-4ee8-8fe0-26e918ad10fe" containerName="dnsmasq-dns" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.639644 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.674057 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmxxv"] Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.735728 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-catalog-content\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.736099 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-utilities\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.736281 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnlbl\" (UniqueName: \"kubernetes.io/projected/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-kube-api-access-wnlbl\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.838253 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnlbl\" (UniqueName: \"kubernetes.io/projected/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-kube-api-access-wnlbl\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.838436 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-catalog-content\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.838480 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-utilities\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.839232 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-utilities\") pod 
\"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.839622 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-catalog-content\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.868806 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnlbl\" (UniqueName: \"kubernetes.io/projected/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-kube-api-access-wnlbl\") pod \"redhat-operators-rmxxv\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") " pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:10 crc kubenswrapper[5028]: I1123 08:46:10.981926 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:11 crc kubenswrapper[5028]: I1123 08:46:11.533745 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rmxxv"] Nov 23 08:46:12 crc kubenswrapper[5028]: I1123 08:46:12.250853 5028 generic.go:334] "Generic (PLEG): container finished" podID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerID="f36c9394a22ff1311598539fdf5842cabbdb3c73acdb346c44c9c4eabfaaa896" exitCode=0 Nov 23 08:46:12 crc kubenswrapper[5028]: I1123 08:46:12.250982 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmxxv" event={"ID":"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e","Type":"ContainerDied","Data":"f36c9394a22ff1311598539fdf5842cabbdb3c73acdb346c44c9c4eabfaaa896"} Nov 23 08:46:12 crc kubenswrapper[5028]: I1123 08:46:12.251308 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmxxv" event={"ID":"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e","Type":"ContainerStarted","Data":"1f7db5e31598060ec9b43356ae3abd2471ec6949d912e25e106935ebbc33c59f"} Nov 23 08:46:13 crc kubenswrapper[5028]: I1123 08:46:13.265733 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmxxv" event={"ID":"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e","Type":"ContainerStarted","Data":"bad34ab57f5e1a0057fbfbc3a4c58315f1eb594fd83dd8c2929ea76b10c94581"} Nov 23 08:46:15 crc kubenswrapper[5028]: I1123 08:46:15.287825 5028 generic.go:334] "Generic (PLEG): container finished" podID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerID="bad34ab57f5e1a0057fbfbc3a4c58315f1eb594fd83dd8c2929ea76b10c94581" exitCode=0 Nov 23 08:46:15 crc kubenswrapper[5028]: I1123 08:46:15.287934 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmxxv" event={"ID":"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e","Type":"ContainerDied","Data":"bad34ab57f5e1a0057fbfbc3a4c58315f1eb594fd83dd8c2929ea76b10c94581"} Nov 23 08:46:15 crc kubenswrapper[5028]: I1123 08:46:15.292530 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:46:16 crc kubenswrapper[5028]: I1123 08:46:16.053445 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:46:16 crc kubenswrapper[5028]: E1123 08:46:16.054894 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:46:16 crc kubenswrapper[5028]: I1123 08:46:16.305899 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmxxv" event={"ID":"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e","Type":"ContainerStarted","Data":"78dea01ec8e212a4f7207e1703bc37fb68f2690f12cc3ddd589f27d88a0f9d2d"} Nov 23 08:46:16 crc kubenswrapper[5028]: I1123 08:46:16.330520 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rmxxv" podStartSLOduration=2.861436535 podStartE2EDuration="6.330494334s" podCreationTimestamp="2025-11-23 08:46:10 +0000 UTC" firstStartedPulling="2025-11-23 08:46:12.254426253 +0000 UTC m=+6955.951831032" lastFinishedPulling="2025-11-23 08:46:15.723484032 +0000 UTC m=+6959.420888831" observedRunningTime="2025-11-23 08:46:16.329027568 +0000 UTC m=+6960.026432357" watchObservedRunningTime="2025-11-23 08:46:16.330494334 +0000 UTC m=+6960.027899113" Nov 23 08:46:20 crc kubenswrapper[5028]: I1123 08:46:20.982236 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:20 crc kubenswrapper[5028]: I1123 08:46:20.982750 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rmxxv" Nov 23 08:46:22 crc kubenswrapper[5028]: I1123 08:46:22.045094 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rmxxv" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="registry-server" probeResult="failure" output=< Nov 23 08:46:22 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 08:46:22 crc kubenswrapper[5028]: > Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.070217 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2lkz7"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.072576 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.083647 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2lkz7"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.136854 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-operator-scripts\") pod \"nova-api-db-create-2lkz7\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.137015 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89bj\" (UniqueName: \"kubernetes.io/projected/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-kube-api-access-t89bj\") pod \"nova-api-db-create-2lkz7\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.156516 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ppnb4"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.158239 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.174567 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ppnb4"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.238344 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-operator-scripts\") pod \"nova-api-db-create-2lkz7\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.238717 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t89bj\" (UniqueName: \"kubernetes.io/projected/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-kube-api-access-t89bj\") pod \"nova-api-db-create-2lkz7\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.239283 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-operator-scripts\") pod \"nova-api-db-create-2lkz7\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.259507 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4b15-account-create-67t6s"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.260730 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.264305 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.271081 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t89bj\" (UniqueName: \"kubernetes.io/projected/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-kube-api-access-t89bj\") pod \"nova-api-db-create-2lkz7\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.288693 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4b15-account-create-67t6s"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.340937 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-operator-scripts\") pod \"nova-cell0-db-create-ppnb4\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.341787 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kmk\" (UniqueName: \"kubernetes.io/projected/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-kube-api-access-l2kmk\") pod \"nova-cell0-db-create-ppnb4\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.355538 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-n8s49"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.360154 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.365346 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n8s49"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.394851 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.444238 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478d4566-77d2-4d75-91d0-c66ee03fbbdd-operator-scripts\") pod \"nova-cell1-db-create-n8s49\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.444308 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-operator-scripts\") pod \"nova-cell0-db-create-ppnb4\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.444370 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26449592-f0d6-466b-bdcf-ebd15f73bf27-operator-scripts\") pod \"nova-api-4b15-account-create-67t6s\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.444451 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/26449592-f0d6-466b-bdcf-ebd15f73bf27-kube-api-access-lqdxk\") pod \"nova-api-4b15-account-create-67t6s\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.444500 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kmk\" (UniqueName: \"kubernetes.io/projected/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-kube-api-access-l2kmk\") pod \"nova-cell0-db-create-ppnb4\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.444566 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t2tk\" (UniqueName: \"kubernetes.io/projected/478d4566-77d2-4d75-91d0-c66ee03fbbdd-kube-api-access-7t2tk\") pod \"nova-cell1-db-create-n8s49\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.445675 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-operator-scripts\") pod \"nova-cell0-db-create-ppnb4\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.469686 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1148-account-create-78mng"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.471750 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.474029 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.480355 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kmk\" (UniqueName: \"kubernetes.io/projected/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-kube-api-access-l2kmk\") pod \"nova-cell0-db-create-ppnb4\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.483403 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1148-account-create-78mng"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.554510 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t2tk\" (UniqueName: \"kubernetes.io/projected/478d4566-77d2-4d75-91d0-c66ee03fbbdd-kube-api-access-7t2tk\") pod \"nova-cell1-db-create-n8s49\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.554630 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478d4566-77d2-4d75-91d0-c66ee03fbbdd-operator-scripts\") pod \"nova-cell1-db-create-n8s49\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.554757 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26449592-f0d6-466b-bdcf-ebd15f73bf27-operator-scripts\") pod \"nova-api-4b15-account-create-67t6s\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.554911 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/26449592-f0d6-466b-bdcf-ebd15f73bf27-kube-api-access-lqdxk\") pod \"nova-api-4b15-account-create-67t6s\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.555083 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-operator-scripts\") pod \"nova-cell0-1148-account-create-78mng\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.555140 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-kube-api-access-629th\") pod \"nova-cell0-1148-account-create-78mng\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.556184 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478d4566-77d2-4d75-91d0-c66ee03fbbdd-operator-scripts\") pod 
\"nova-cell1-db-create-n8s49\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.557374 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26449592-f0d6-466b-bdcf-ebd15f73bf27-operator-scripts\") pod \"nova-api-4b15-account-create-67t6s\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.573827 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t2tk\" (UniqueName: \"kubernetes.io/projected/478d4566-77d2-4d75-91d0-c66ee03fbbdd-kube-api-access-7t2tk\") pod \"nova-cell1-db-create-n8s49\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.577322 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/26449592-f0d6-466b-bdcf-ebd15f73bf27-kube-api-access-lqdxk\") pod \"nova-api-4b15-account-create-67t6s\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.625966 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.656932 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-operator-scripts\") pod \"nova-cell0-1148-account-create-78mng\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.657016 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-kube-api-access-629th\") pod \"nova-cell0-1148-account-create-78mng\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.658240 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-operator-scripts\") pod \"nova-cell0-1148-account-create-78mng\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.664764 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-283a-account-create-2pzrs"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.675771 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.680656 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-283a-account-create-2pzrs"] Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.680807 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.682642 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.684750 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-kube-api-access-629th\") pod \"nova-cell0-1148-account-create-78mng\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.758841 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ff0ece-c044-4e7b-9efb-83805ed11901-operator-scripts\") pod \"nova-cell1-283a-account-create-2pzrs\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.759008 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95w2g\" (UniqueName: \"kubernetes.io/projected/69ff0ece-c044-4e7b-9efb-83805ed11901-kube-api-access-95w2g\") pod \"nova-cell1-283a-account-create-2pzrs\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:25 crc kubenswrapper[5028]: I1123 08:46:25.777312 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:25.862375 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95w2g\" (UniqueName: \"kubernetes.io/projected/69ff0ece-c044-4e7b-9efb-83805ed11901-kube-api-access-95w2g\") pod \"nova-cell1-283a-account-create-2pzrs\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:25.862562 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ff0ece-c044-4e7b-9efb-83805ed11901-operator-scripts\") pod \"nova-cell1-283a-account-create-2pzrs\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:25.863962 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ff0ece-c044-4e7b-9efb-83805ed11901-operator-scripts\") pod \"nova-cell1-283a-account-create-2pzrs\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:25.880616 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95w2g\" (UniqueName: \"kubernetes.io/projected/69ff0ece-c044-4e7b-9efb-83805ed11901-kube-api-access-95w2g\") pod \"nova-cell1-283a-account-create-2pzrs\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:25.932627 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2lkz7"] Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 
08:46:25.936078 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.007056 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.187254 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4b15-account-create-67t6s"] Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.440589 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4b15-account-create-67t6s" event={"ID":"26449592-f0d6-466b-bdcf-ebd15f73bf27","Type":"ContainerStarted","Data":"093f01a7e8bb9082e931d6c2a3bb7ee8ee0cc657567438d3d787ebf17185c77e"} Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.446442 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2lkz7" event={"ID":"79b109a7-3cb4-4f58-81b7-b6c9b44c1657","Type":"ContainerStarted","Data":"d30643b6f60afd88c8e406d72d49b33df3d8deae8ccc5b390d22ca815f57e3bf"} Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.446484 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2lkz7" event={"ID":"79b109a7-3cb4-4f58-81b7-b6c9b44c1657","Type":"ContainerStarted","Data":"58380038eb73f74cadcaa3a192eb2bb9efcecb65c7d0ed9e70a03ed94266f369"} Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.468529 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-2lkz7" podStartSLOduration=1.4685081229999999 podStartE2EDuration="1.468508123s" podCreationTimestamp="2025-11-23 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:46:26.463345655 +0000 UTC m=+6970.160750424" watchObservedRunningTime="2025-11-23 08:46:26.468508123 +0000 UTC m=+6970.165912902" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.859771 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-n8s49"] Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.923280 5028 scope.go:117] "RemoveContainer" containerID="2fa8196c86108cbd58f2f965a955bd8d0600cf4180af89101b90ba9a1238f9d2" Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.949030 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-283a-account-create-2pzrs"] Nov 23 08:46:26 crc kubenswrapper[5028]: I1123 08:46:26.975750 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1148-account-create-78mng"] Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.005137 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ppnb4"] Nov 23 08:46:27 crc kubenswrapper[5028]: W1123 08:46:27.052801 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4af8f82a_ad6a_401b_880d_cd612c1fd9a6.slice/crio-bd5956f6f29fc0fe1ac81fa83c8d691e0ed5c196ce0c8c6a8999f92f54a277de WatchSource:0}: Error finding container bd5956f6f29fc0fe1ac81fa83c8d691e0ed5c196ce0c8c6a8999f92f54a277de: Status 404 returned error can't find the container with id bd5956f6f29fc0fe1ac81fa83c8d691e0ed5c196ce0c8c6a8999f92f54a277de Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.460089 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-n8s49" event={"ID":"478d4566-77d2-4d75-91d0-c66ee03fbbdd","Type":"ContainerStarted","Data":"a2a3e1e31143fc028d6d0792226bfcc76eb5aff8fc1ec0107dd0103a15ada710"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.460141 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n8s49" event={"ID":"478d4566-77d2-4d75-91d0-c66ee03fbbdd","Type":"ContainerStarted","Data":"d6888ab3c72cad4f68a338ad6946627d6e94bf22ac5e22577a092ec953d77fde"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.462287 5028 generic.go:334] "Generic (PLEG): container finished" podID="79b109a7-3cb4-4f58-81b7-b6c9b44c1657" containerID="d30643b6f60afd88c8e406d72d49b33df3d8deae8ccc5b390d22ca815f57e3bf" exitCode=0 Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.462474 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2lkz7" event={"ID":"79b109a7-3cb4-4f58-81b7-b6c9b44c1657","Type":"ContainerDied","Data":"d30643b6f60afd88c8e406d72d49b33df3d8deae8ccc5b390d22ca815f57e3bf"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.464076 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ppnb4" event={"ID":"4af8f82a-ad6a-401b-880d-cd612c1fd9a6","Type":"ContainerStarted","Data":"c966cf9a6d1cfa2ba571f9916b2abed30f1a7c56550696fee9cce5c422849aea"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.464106 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ppnb4" event={"ID":"4af8f82a-ad6a-401b-880d-cd612c1fd9a6","Type":"ContainerStarted","Data":"bd5956f6f29fc0fe1ac81fa83c8d691e0ed5c196ce0c8c6a8999f92f54a277de"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.473364 5028 generic.go:334] "Generic (PLEG): container finished" podID="26449592-f0d6-466b-bdcf-ebd15f73bf27" containerID="2da34d072f784460313c472ebcbe337286c27ed36bec29fd65edeb3a68f49214" exitCode=0 Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.473597 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4b15-account-create-67t6s" event={"ID":"26449592-f0d6-466b-bdcf-ebd15f73bf27","Type":"ContainerDied","Data":"2da34d072f784460313c472ebcbe337286c27ed36bec29fd65edeb3a68f49214"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.475852 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1148-account-create-78mng" event={"ID":"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2","Type":"ContainerStarted","Data":"c8501241c2999a4baf6d895e8e113d61704fd723bf0e2a3744d1f08a64768230"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.475887 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1148-account-create-78mng" event={"ID":"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2","Type":"ContainerStarted","Data":"60d365874ed41d9bd03a64bfaed1b8c974baaf60623a944ea059a8c75e3024a4"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.477907 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-283a-account-create-2pzrs" event={"ID":"69ff0ece-c044-4e7b-9efb-83805ed11901","Type":"ContainerStarted","Data":"850d87ba0879040d37d24d57bd385a6068b4d52e26e78663b1b1dd31bfdd45a8"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.477938 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-283a-account-create-2pzrs" 
event={"ID":"69ff0ece-c044-4e7b-9efb-83805ed11901","Type":"ContainerStarted","Data":"f416caf2860be9de6a4968641a4866c387f21b38ca94c74a8293d04d6ce4d640"} Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.498701 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1148-account-create-78mng" podStartSLOduration=2.498684388 podStartE2EDuration="2.498684388s" podCreationTimestamp="2025-11-23 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:46:27.497573901 +0000 UTC m=+6971.194978680" watchObservedRunningTime="2025-11-23 08:46:27.498684388 +0000 UTC m=+6971.196089167" Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.508258 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-n8s49" podStartSLOduration=2.5082374830000003 podStartE2EDuration="2.508237483s" podCreationTimestamp="2025-11-23 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:46:27.482968451 +0000 UTC m=+6971.180373230" watchObservedRunningTime="2025-11-23 08:46:27.508237483 +0000 UTC m=+6971.205642262" Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.537920 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-283a-account-create-2pzrs" podStartSLOduration=2.537890144 podStartE2EDuration="2.537890144s" podCreationTimestamp="2025-11-23 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:46:27.531469456 +0000 UTC m=+6971.228874235" watchObservedRunningTime="2025-11-23 08:46:27.537890144 +0000 UTC m=+6971.235294923" Nov 23 08:46:27 crc kubenswrapper[5028]: I1123 08:46:27.553238 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-ppnb4" podStartSLOduration=2.5532032510000002 podStartE2EDuration="2.553203251s" podCreationTimestamp="2025-11-23 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:46:27.550094684 +0000 UTC m=+6971.247499463" watchObservedRunningTime="2025-11-23 08:46:27.553203251 +0000 UTC m=+6971.250608030" Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.498530 5028 generic.go:334] "Generic (PLEG): container finished" podID="69ff0ece-c044-4e7b-9efb-83805ed11901" containerID="850d87ba0879040d37d24d57bd385a6068b4d52e26e78663b1b1dd31bfdd45a8" exitCode=0 Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.498634 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-283a-account-create-2pzrs" event={"ID":"69ff0ece-c044-4e7b-9efb-83805ed11901","Type":"ContainerDied","Data":"850d87ba0879040d37d24d57bd385a6068b4d52e26e78663b1b1dd31bfdd45a8"} Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.501972 5028 generic.go:334] "Generic (PLEG): container finished" podID="478d4566-77d2-4d75-91d0-c66ee03fbbdd" containerID="a2a3e1e31143fc028d6d0792226bfcc76eb5aff8fc1ec0107dd0103a15ada710" exitCode=0 Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.502133 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n8s49" 
event={"ID":"478d4566-77d2-4d75-91d0-c66ee03fbbdd","Type":"ContainerDied","Data":"a2a3e1e31143fc028d6d0792226bfcc76eb5aff8fc1ec0107dd0103a15ada710"} Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.505146 5028 generic.go:334] "Generic (PLEG): container finished" podID="4af8f82a-ad6a-401b-880d-cd612c1fd9a6" containerID="c966cf9a6d1cfa2ba571f9916b2abed30f1a7c56550696fee9cce5c422849aea" exitCode=0 Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.505266 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ppnb4" event={"ID":"4af8f82a-ad6a-401b-880d-cd612c1fd9a6","Type":"ContainerDied","Data":"c966cf9a6d1cfa2ba571f9916b2abed30f1a7c56550696fee9cce5c422849aea"} Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.508475 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc3ee139-2d5c-445c-aca1-2b7468c4ffe2" containerID="c8501241c2999a4baf6d895e8e113d61704fd723bf0e2a3744d1f08a64768230" exitCode=0 Nov 23 08:46:28 crc kubenswrapper[5028]: I1123 08:46:28.508576 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1148-account-create-78mng" event={"ID":"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2","Type":"ContainerDied","Data":"c8501241c2999a4baf6d895e8e113d61704fd723bf0e2a3744d1f08a64768230"} Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.015907 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.024592 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.058774 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:46:29 crc kubenswrapper[5028]: E1123 08:46:29.059615 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.136344 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/26449592-f0d6-466b-bdcf-ebd15f73bf27-kube-api-access-lqdxk\") pod \"26449592-f0d6-466b-bdcf-ebd15f73bf27\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.136785 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-operator-scripts\") pod \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.136985 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26449592-f0d6-466b-bdcf-ebd15f73bf27-operator-scripts\") pod \"26449592-f0d6-466b-bdcf-ebd15f73bf27\" (UID: \"26449592-f0d6-466b-bdcf-ebd15f73bf27\") " Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.137128 5028 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-t89bj\" (UniqueName: \"kubernetes.io/projected/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-kube-api-access-t89bj\") pod \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\" (UID: \"79b109a7-3cb4-4f58-81b7-b6c9b44c1657\") " Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.138174 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79b109a7-3cb4-4f58-81b7-b6c9b44c1657" (UID: "79b109a7-3cb4-4f58-81b7-b6c9b44c1657"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.138190 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26449592-f0d6-466b-bdcf-ebd15f73bf27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26449592-f0d6-466b-bdcf-ebd15f73bf27" (UID: "26449592-f0d6-466b-bdcf-ebd15f73bf27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.144541 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-kube-api-access-t89bj" (OuterVolumeSpecName: "kube-api-access-t89bj") pod "79b109a7-3cb4-4f58-81b7-b6c9b44c1657" (UID: "79b109a7-3cb4-4f58-81b7-b6c9b44c1657"). InnerVolumeSpecName "kube-api-access-t89bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.158530 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26449592-f0d6-466b-bdcf-ebd15f73bf27-kube-api-access-lqdxk" (OuterVolumeSpecName: "kube-api-access-lqdxk") pod "26449592-f0d6-466b-bdcf-ebd15f73bf27" (UID: "26449592-f0d6-466b-bdcf-ebd15f73bf27"). InnerVolumeSpecName "kube-api-access-lqdxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.239526 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.239565 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26449592-f0d6-466b-bdcf-ebd15f73bf27-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.239577 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t89bj\" (UniqueName: \"kubernetes.io/projected/79b109a7-3cb4-4f58-81b7-b6c9b44c1657-kube-api-access-t89bj\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.239593 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqdxk\" (UniqueName: \"kubernetes.io/projected/26449592-f0d6-466b-bdcf-ebd15f73bf27-kube-api-access-lqdxk\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.520106 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4b15-account-create-67t6s" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.520098 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4b15-account-create-67t6s" event={"ID":"26449592-f0d6-466b-bdcf-ebd15f73bf27","Type":"ContainerDied","Data":"093f01a7e8bb9082e931d6c2a3bb7ee8ee0cc657567438d3d787ebf17185c77e"} Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.520237 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="093f01a7e8bb9082e931d6c2a3bb7ee8ee0cc657567438d3d787ebf17185c77e" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.522622 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2lkz7" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.522618 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2lkz7" event={"ID":"79b109a7-3cb4-4f58-81b7-b6c9b44c1657","Type":"ContainerDied","Data":"58380038eb73f74cadcaa3a192eb2bb9efcecb65c7d0ed9e70a03ed94266f369"} Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.523208 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58380038eb73f74cadcaa3a192eb2bb9efcecb65c7d0ed9e70a03ed94266f369" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.863578 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.953385 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-operator-scripts\") pod \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.953942 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4af8f82a-ad6a-401b-880d-cd612c1fd9a6" (UID: "4af8f82a-ad6a-401b-880d-cd612c1fd9a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.954042 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kmk\" (UniqueName: \"kubernetes.io/projected/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-kube-api-access-l2kmk\") pod \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\" (UID: \"4af8f82a-ad6a-401b-880d-cd612c1fd9a6\") " Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.955248 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:29 crc kubenswrapper[5028]: I1123 08:46:29.961888 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-kube-api-access-l2kmk" (OuterVolumeSpecName: "kube-api-access-l2kmk") pod "4af8f82a-ad6a-401b-880d-cd612c1fd9a6" (UID: "4af8f82a-ad6a-401b-880d-cd612c1fd9a6"). InnerVolumeSpecName "kube-api-access-l2kmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.059632 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kmk\" (UniqueName: \"kubernetes.io/projected/4af8f82a-ad6a-401b-880d-cd612c1fd9a6-kube-api-access-l2kmk\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.082559 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.086393 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.093508 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.160406 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-kube-api-access-629th\") pod \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.160490 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478d4566-77d2-4d75-91d0-c66ee03fbbdd-operator-scripts\") pod \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.160611 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95w2g\" (UniqueName: \"kubernetes.io/projected/69ff0ece-c044-4e7b-9efb-83805ed11901-kube-api-access-95w2g\") pod \"69ff0ece-c044-4e7b-9efb-83805ed11901\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.160671 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-operator-scripts\") pod \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\" (UID: \"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2\") " Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.160771 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ff0ece-c044-4e7b-9efb-83805ed11901-operator-scripts\") pod \"69ff0ece-c044-4e7b-9efb-83805ed11901\" (UID: \"69ff0ece-c044-4e7b-9efb-83805ed11901\") " Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.160840 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t2tk\" (UniqueName: \"kubernetes.io/projected/478d4566-77d2-4d75-91d0-c66ee03fbbdd-kube-api-access-7t2tk\") pod \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\" (UID: \"478d4566-77d2-4d75-91d0-c66ee03fbbdd\") " Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.162140 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cc3ee139-2d5c-445c-aca1-2b7468c4ffe2" (UID: "cc3ee139-2d5c-445c-aca1-2b7468c4ffe2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.163967 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/478d4566-77d2-4d75-91d0-c66ee03fbbdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "478d4566-77d2-4d75-91d0-c66ee03fbbdd" (UID: "478d4566-77d2-4d75-91d0-c66ee03fbbdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.164106 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69ff0ece-c044-4e7b-9efb-83805ed11901-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69ff0ece-c044-4e7b-9efb-83805ed11901" (UID: "69ff0ece-c044-4e7b-9efb-83805ed11901"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.165334 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/478d4566-77d2-4d75-91d0-c66ee03fbbdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.165359 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.165371 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69ff0ece-c044-4e7b-9efb-83805ed11901-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.168273 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478d4566-77d2-4d75-91d0-c66ee03fbbdd-kube-api-access-7t2tk" (OuterVolumeSpecName: "kube-api-access-7t2tk") pod "478d4566-77d2-4d75-91d0-c66ee03fbbdd" (UID: "478d4566-77d2-4d75-91d0-c66ee03fbbdd"). InnerVolumeSpecName "kube-api-access-7t2tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.168453 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ff0ece-c044-4e7b-9efb-83805ed11901-kube-api-access-95w2g" (OuterVolumeSpecName: "kube-api-access-95w2g") pod "69ff0ece-c044-4e7b-9efb-83805ed11901" (UID: "69ff0ece-c044-4e7b-9efb-83805ed11901"). InnerVolumeSpecName "kube-api-access-95w2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.176164 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-kube-api-access-629th" (OuterVolumeSpecName: "kube-api-access-629th") pod "cc3ee139-2d5c-445c-aca1-2b7468c4ffe2" (UID: "cc3ee139-2d5c-445c-aca1-2b7468c4ffe2"). InnerVolumeSpecName "kube-api-access-629th". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.267234 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-629th\" (UniqueName: \"kubernetes.io/projected/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2-kube-api-access-629th\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.267482 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95w2g\" (UniqueName: \"kubernetes.io/projected/69ff0ece-c044-4e7b-9efb-83805ed11901-kube-api-access-95w2g\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.267540 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t2tk\" (UniqueName: \"kubernetes.io/projected/478d4566-77d2-4d75-91d0-c66ee03fbbdd-kube-api-access-7t2tk\") on node \"crc\" DevicePath \"\"" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.539016 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-n8s49" event={"ID":"478d4566-77d2-4d75-91d0-c66ee03fbbdd","Type":"ContainerDied","Data":"d6888ab3c72cad4f68a338ad6946627d6e94bf22ac5e22577a092ec953d77fde"} Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.539084 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6888ab3c72cad4f68a338ad6946627d6e94bf22ac5e22577a092ec953d77fde" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.539160 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-n8s49" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.541466 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ppnb4" event={"ID":"4af8f82a-ad6a-401b-880d-cd612c1fd9a6","Type":"ContainerDied","Data":"bd5956f6f29fc0fe1ac81fa83c8d691e0ed5c196ce0c8c6a8999f92f54a277de"} Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.541512 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5956f6f29fc0fe1ac81fa83c8d691e0ed5c196ce0c8c6a8999f92f54a277de" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.541594 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ppnb4" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.545715 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1148-account-create-78mng" event={"ID":"cc3ee139-2d5c-445c-aca1-2b7468c4ffe2","Type":"ContainerDied","Data":"60d365874ed41d9bd03a64bfaed1b8c974baaf60623a944ea059a8c75e3024a4"} Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.545778 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60d365874ed41d9bd03a64bfaed1b8c974baaf60623a944ea059a8c75e3024a4" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.545797 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1148-account-create-78mng" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.547637 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-283a-account-create-2pzrs" event={"ID":"69ff0ece-c044-4e7b-9efb-83805ed11901","Type":"ContainerDied","Data":"f416caf2860be9de6a4968641a4866c387f21b38ca94c74a8293d04d6ce4d640"} Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.547677 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f416caf2860be9de6a4968641a4866c387f21b38ca94c74a8293d04d6ce4d640" Nov 23 08:46:30 crc kubenswrapper[5028]: I1123 08:46:30.547777 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-283a-account-create-2pzrs" Nov 23 08:46:32 crc kubenswrapper[5028]: I1123 08:46:32.046363 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rmxxv" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="registry-server" probeResult="failure" output=< Nov 23 08:46:32 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 08:46:32 crc kubenswrapper[5028]: > Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.728998 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dfhjk"] Nov 23 08:46:35 crc kubenswrapper[5028]: E1123 08:46:35.730154 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26449592-f0d6-466b-bdcf-ebd15f73bf27" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730169 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="26449592-f0d6-466b-bdcf-ebd15f73bf27" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: E1123 08:46:35.730186 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79b109a7-3cb4-4f58-81b7-b6c9b44c1657" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730192 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="79b109a7-3cb4-4f58-81b7-b6c9b44c1657" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: E1123 08:46:35.730206 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478d4566-77d2-4d75-91d0-c66ee03fbbdd" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730213 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="478d4566-77d2-4d75-91d0-c66ee03fbbdd" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: E1123 08:46:35.730223 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ff0ece-c044-4e7b-9efb-83805ed11901" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730229 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ff0ece-c044-4e7b-9efb-83805ed11901" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: E1123 08:46:35.730246 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af8f82a-ad6a-401b-880d-cd612c1fd9a6" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730252 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af8f82a-ad6a-401b-880d-cd612c1fd9a6" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: E1123 08:46:35.730281 5028 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cc3ee139-2d5c-445c-aca1-2b7468c4ffe2" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730287 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3ee139-2d5c-445c-aca1-2b7468c4ffe2" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730472 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af8f82a-ad6a-401b-880d-cd612c1fd9a6" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730490 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3ee139-2d5c-445c-aca1-2b7468c4ffe2" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730496 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="478d4566-77d2-4d75-91d0-c66ee03fbbdd" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730508 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ff0ece-c044-4e7b-9efb-83805ed11901" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730522 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="26449592-f0d6-466b-bdcf-ebd15f73bf27" containerName="mariadb-account-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.730535 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="79b109a7-3cb4-4f58-81b7-b6c9b44c1657" containerName="mariadb-database-create" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.731187 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.737660 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.737920 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.738445 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v7kzp" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.745237 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dfhjk"] Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.826800 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-scripts\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.826917 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.827055 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fzp5\" (UniqueName: 
\"kubernetes.io/projected/e38acb26-f8ac-427f-87bb-2497523de298-kube-api-access-4fzp5\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.827166 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-config-data\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.928561 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-config-data\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.928719 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-scripts\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.928799 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.928869 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fzp5\" (UniqueName: \"kubernetes.io/projected/e38acb26-f8ac-427f-87bb-2497523de298-kube-api-access-4fzp5\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.936885 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-scripts\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.937065 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.940365 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-config-data\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:35 crc kubenswrapper[5028]: I1123 08:46:35.953520 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4fzp5\" (UniqueName: \"kubernetes.io/projected/e38acb26-f8ac-427f-87bb-2497523de298-kube-api-access-4fzp5\") pod \"nova-cell0-conductor-db-sync-dfhjk\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") " pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:36 crc kubenswrapper[5028]: I1123 08:46:36.076512 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dfhjk" Nov 23 08:46:36 crc kubenswrapper[5028]: I1123 08:46:36.554208 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dfhjk"] Nov 23 08:46:36 crc kubenswrapper[5028]: I1123 08:46:36.650023 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dfhjk" event={"ID":"e38acb26-f8ac-427f-87bb-2497523de298","Type":"ContainerStarted","Data":"7dada04150cb7c7b9b1280708ab121beda16a3900c7a0cb68d0d76989c77d312"} Nov 23 08:46:41 crc kubenswrapper[5028]: I1123 08:46:41.054207 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:46:41 crc kubenswrapper[5028]: E1123 08:46:41.055594 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:46:42 crc kubenswrapper[5028]: I1123 08:46:42.054456 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rmxxv" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="registry-server" probeResult="failure" output=< Nov 23 08:46:42 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 08:46:42 crc kubenswrapper[5028]: > Nov 23 08:46:46 crc kubenswrapper[5028]: I1123 08:46:46.762138 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dfhjk" event={"ID":"e38acb26-f8ac-427f-87bb-2497523de298","Type":"ContainerStarted","Data":"13e702adb507ae7ec828300737f9c0c487efdae5c8c9e83def21cbfba272f154"} Nov 23 08:46:46 crc kubenswrapper[5028]: I1123 08:46:46.785380 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dfhjk" podStartSLOduration=2.515960044 podStartE2EDuration="11.785349346s" podCreationTimestamp="2025-11-23 08:46:35 +0000 UTC" firstStartedPulling="2025-11-23 08:46:36.568849544 +0000 UTC m=+6980.266254363" lastFinishedPulling="2025-11-23 08:46:45.838238876 +0000 UTC m=+6989.535643665" observedRunningTime="2025-11-23 08:46:46.77739527 +0000 UTC m=+6990.474800069" watchObservedRunningTime="2025-11-23 08:46:46.785349346 +0000 UTC m=+6990.482754125" Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.307080 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phrv2"] Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.310230 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.344156 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phrv2"]
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.486308 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6672d\" (UniqueName: \"kubernetes.io/projected/88beafde-d053-4862-83df-c20db8d6dfd4-kube-api-access-6672d\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.486693 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-utilities\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.486899 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-catalog-content\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.588988 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-catalog-content\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.589143 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6672d\" (UniqueName: \"kubernetes.io/projected/88beafde-d053-4862-83df-c20db8d6dfd4-kube-api-access-6672d\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.589187 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-utilities\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.589718 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-catalog-content\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.589850 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-utilities\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.616491 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6672d\" (UniqueName: \"kubernetes.io/projected/88beafde-d053-4862-83df-c20db8d6dfd4-kube-api-access-6672d\") pod \"certified-operators-phrv2\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") " pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:47 crc kubenswrapper[5028]: I1123 08:46:47.645053 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:48 crc kubenswrapper[5028]: I1123 08:46:48.203737 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phrv2"]
Nov 23 08:46:48 crc kubenswrapper[5028]: W1123 08:46:48.215201 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88beafde_d053_4862_83df_c20db8d6dfd4.slice/crio-12c8e2913f7342ae83e422d1efbdea65fb0ab27465935c224857e46a505da56a WatchSource:0}: Error finding container 12c8e2913f7342ae83e422d1efbdea65fb0ab27465935c224857e46a505da56a: Status 404 returned error can't find the container with id 12c8e2913f7342ae83e422d1efbdea65fb0ab27465935c224857e46a505da56a
Nov 23 08:46:48 crc kubenswrapper[5028]: I1123 08:46:48.803349 5028 generic.go:334] "Generic (PLEG): container finished" podID="88beafde-d053-4862-83df-c20db8d6dfd4" containerID="795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637" exitCode=0
Nov 23 08:46:48 crc kubenswrapper[5028]: I1123 08:46:48.803392 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phrv2" event={"ID":"88beafde-d053-4862-83df-c20db8d6dfd4","Type":"ContainerDied","Data":"795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637"}
Nov 23 08:46:48 crc kubenswrapper[5028]: I1123 08:46:48.803729 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phrv2" event={"ID":"88beafde-d053-4862-83df-c20db8d6dfd4","Type":"ContainerStarted","Data":"12c8e2913f7342ae83e422d1efbdea65fb0ab27465935c224857e46a505da56a"}
Nov 23 08:46:49 crc kubenswrapper[5028]: I1123 08:46:49.813162 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phrv2" event={"ID":"88beafde-d053-4862-83df-c20db8d6dfd4","Type":"ContainerStarted","Data":"8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b"}
Nov 23 08:46:50 crc kubenswrapper[5028]: I1123 08:46:50.825613 5028 generic.go:334] "Generic (PLEG): container finished" podID="88beafde-d053-4862-83df-c20db8d6dfd4" containerID="8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b" exitCode=0
Nov 23 08:46:50 crc kubenswrapper[5028]: I1123 08:46:50.825719 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phrv2" event={"ID":"88beafde-d053-4862-83df-c20db8d6dfd4","Type":"ContainerDied","Data":"8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b"}
Nov 23 08:46:51 crc kubenswrapper[5028]: I1123 08:46:51.072619 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rmxxv"
Nov 23 08:46:51 crc kubenswrapper[5028]: I1123 08:46:51.123605 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rmxxv"
Nov 23 08:46:51 crc kubenswrapper[5028]: I1123 08:46:51.843351 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phrv2" event={"ID":"88beafde-d053-4862-83df-c20db8d6dfd4","Type":"ContainerStarted","Data":"66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d"}
Nov 23 08:46:52 crc kubenswrapper[5028]: I1123 08:46:52.855584 5028 generic.go:334] "Generic (PLEG): container finished" podID="e38acb26-f8ac-427f-87bb-2497523de298" containerID="13e702adb507ae7ec828300737f9c0c487efdae5c8c9e83def21cbfba272f154" exitCode=0
Nov 23 08:46:52 crc kubenswrapper[5028]: I1123 08:46:52.855620 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dfhjk" event={"ID":"e38acb26-f8ac-427f-87bb-2497523de298","Type":"ContainerDied","Data":"13e702adb507ae7ec828300737f9c0c487efdae5c8c9e83def21cbfba272f154"}
Nov 23 08:46:52 crc kubenswrapper[5028]: I1123 08:46:52.879588 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phrv2" podStartSLOduration=3.246298625 podStartE2EDuration="5.879564217s" podCreationTimestamp="2025-11-23 08:46:47 +0000 UTC" firstStartedPulling="2025-11-23 08:46:48.806560771 +0000 UTC m=+6992.503965590" lastFinishedPulling="2025-11-23 08:46:51.439826403 +0000 UTC m=+6995.137231182" observedRunningTime="2025-11-23 08:46:51.871577208 +0000 UTC m=+6995.568981997" watchObservedRunningTime="2025-11-23 08:46:52.879564217 +0000 UTC m=+6996.576969006"
Nov 23 08:46:53 crc kubenswrapper[5028]: I1123 08:46:53.472461 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmxxv"]
Nov 23 08:46:53 crc kubenswrapper[5028]: I1123 08:46:53.472751 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rmxxv" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="registry-server" containerID="cri-o://78dea01ec8e212a4f7207e1703bc37fb68f2690f12cc3ddd589f27d88a0f9d2d" gracePeriod=2
Nov 23 08:46:53 crc kubenswrapper[5028]: I1123 08:46:53.880398 5028 generic.go:334] "Generic (PLEG): container finished" podID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerID="78dea01ec8e212a4f7207e1703bc37fb68f2690f12cc3ddd589f27d88a0f9d2d" exitCode=0
Nov 23 08:46:53 crc kubenswrapper[5028]: I1123 08:46:53.880585 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmxxv" event={"ID":"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e","Type":"ContainerDied","Data":"78dea01ec8e212a4f7207e1703bc37fb68f2690f12cc3ddd589f27d88a0f9d2d"}
Nov 23 08:46:53 crc kubenswrapper[5028]: I1123 08:46:53.989259 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmxxv"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.120618 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-utilities\") pod \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") "
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.120672 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-catalog-content\") pod \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") "
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.120770 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnlbl\" (UniqueName: \"kubernetes.io/projected/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-kube-api-access-wnlbl\") pod \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\" (UID: \"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e\") "
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.121736 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-utilities" (OuterVolumeSpecName: "utilities") pod "d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" (UID: "d4eaa620-ce8e-47bb-870b-3165d4d7cc1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.131579 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-kube-api-access-wnlbl" (OuterVolumeSpecName: "kube-api-access-wnlbl") pod "d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" (UID: "d4eaa620-ce8e-47bb-870b-3165d4d7cc1e"). InnerVolumeSpecName "kube-api-access-wnlbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.195747 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dfhjk"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.235118 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.235157 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnlbl\" (UniqueName: \"kubernetes.io/projected/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-kube-api-access-wnlbl\") on node \"crc\" DevicePath \"\""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.236681 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" (UID: "d4eaa620-ce8e-47bb-870b-3165d4d7cc1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.357265 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-scripts\") pod \"e38acb26-f8ac-427f-87bb-2497523de298\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") "
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.358011 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-config-data\") pod \"e38acb26-f8ac-427f-87bb-2497523de298\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") "
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.358642 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fzp5\" (UniqueName: \"kubernetes.io/projected/e38acb26-f8ac-427f-87bb-2497523de298-kube-api-access-4fzp5\") pod \"e38acb26-f8ac-427f-87bb-2497523de298\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") "
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.358747 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-combined-ca-bundle\") pod \"e38acb26-f8ac-427f-87bb-2497523de298\" (UID: \"e38acb26-f8ac-427f-87bb-2497523de298\") "
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.359392 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.366198 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e38acb26-f8ac-427f-87bb-2497523de298-kube-api-access-4fzp5" (OuterVolumeSpecName: "kube-api-access-4fzp5") pod "e38acb26-f8ac-427f-87bb-2497523de298" (UID: "e38acb26-f8ac-427f-87bb-2497523de298"). InnerVolumeSpecName "kube-api-access-4fzp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.366340 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-scripts" (OuterVolumeSpecName: "scripts") pod "e38acb26-f8ac-427f-87bb-2497523de298" (UID: "e38acb26-f8ac-427f-87bb-2497523de298"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.389870 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e38acb26-f8ac-427f-87bb-2497523de298" (UID: "e38acb26-f8ac-427f-87bb-2497523de298"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.392551 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-config-data" (OuterVolumeSpecName: "config-data") pod "e38acb26-f8ac-427f-87bb-2497523de298" (UID: "e38acb26-f8ac-427f-87bb-2497523de298"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.461740 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fzp5\" (UniqueName: \"kubernetes.io/projected/e38acb26-f8ac-427f-87bb-2497523de298-kube-api-access-4fzp5\") on node \"crc\" DevicePath \"\""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.462363 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.462380 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.462395 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e38acb26-f8ac-427f-87bb-2497523de298-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.898338 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rmxxv" event={"ID":"d4eaa620-ce8e-47bb-870b-3165d4d7cc1e","Type":"ContainerDied","Data":"1f7db5e31598060ec9b43356ae3abd2471ec6949d912e25e106935ebbc33c59f"}
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.898436 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rmxxv"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.899127 5028 scope.go:117] "RemoveContainer" containerID="78dea01ec8e212a4f7207e1703bc37fb68f2690f12cc3ddd589f27d88a0f9d2d"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.901312 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dfhjk" event={"ID":"e38acb26-f8ac-427f-87bb-2497523de298","Type":"ContainerDied","Data":"7dada04150cb7c7b9b1280708ab121beda16a3900c7a0cb68d0d76989c77d312"}
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.901413 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dfhjk"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.901420 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dada04150cb7c7b9b1280708ab121beda16a3900c7a0cb68d0d76989c77d312"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.932069 5028 scope.go:117] "RemoveContainer" containerID="bad34ab57f5e1a0057fbfbc3a4c58315f1eb594fd83dd8c2929ea76b10c94581"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.958079 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rmxxv"]
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.968009 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rmxxv"]
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.977371 5028 scope.go:117] "RemoveContainer" containerID="f36c9394a22ff1311598539fdf5842cabbdb3c73acdb346c44c9c4eabfaaa896"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.997420 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 23 08:46:54 crc kubenswrapper[5028]: E1123 08:46:54.998128 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="registry-server"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.998225 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="registry-server"
Nov 23 08:46:54 crc kubenswrapper[5028]: E1123 08:46:54.998317 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="extract-utilities"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.998409 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="extract-utilities"
Nov 23 08:46:54 crc kubenswrapper[5028]: E1123 08:46:54.998497 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="extract-content"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.998563 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="extract-content"
Nov 23 08:46:54 crc kubenswrapper[5028]: E1123 08:46:54.998632 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e38acb26-f8ac-427f-87bb-2497523de298" containerName="nova-cell0-conductor-db-sync"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.998705 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e38acb26-f8ac-427f-87bb-2497523de298" containerName="nova-cell0-conductor-db-sync"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.998942 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e38acb26-f8ac-427f-87bb-2497523de298" containerName="nova-cell0-conductor-db-sync"
Nov 23 08:46:54 crc kubenswrapper[5028]: I1123 08:46:54.999038 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" containerName="registry-server"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:54.999871 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.005245 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.005405 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v7kzp"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.012532 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.068803 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4eaa620-ce8e-47bb-870b-3165d4d7cc1e" path="/var/lib/kubelet/pods/d4eaa620-ce8e-47bb-870b-3165d4d7cc1e/volumes"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.177876 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99hrd\" (UniqueName: \"kubernetes.io/projected/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-kube-api-access-99hrd\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.178500 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.178669 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.281697 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99hrd\" (UniqueName: \"kubernetes.io/projected/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-kube-api-access-99hrd\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.281777 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.281811 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.291917 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.292997 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.306624 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99hrd\" (UniqueName: \"kubernetes.io/projected/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-kube-api-access-99hrd\") pod \"nova-cell0-conductor-0\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.365881 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.801363 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 23 08:46:55 crc kubenswrapper[5028]: I1123 08:46:55.913627 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf","Type":"ContainerStarted","Data":"d0ef806f070fb5d313bcdbe1e5f85afbf0e9c7fbd2c37166ed08c8a9d045b93f"}
Nov 23 08:46:56 crc kubenswrapper[5028]: I1123 08:46:56.053199 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"
Nov 23 08:46:56 crc kubenswrapper[5028]: E1123 08:46:56.053611 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:46:56 crc kubenswrapper[5028]: I1123 08:46:56.932686 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf","Type":"ContainerStarted","Data":"bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7"}
Nov 23 08:46:56 crc kubenswrapper[5028]: I1123 08:46:56.933285 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Nov 23 08:46:56 crc kubenswrapper[5028]: I1123 08:46:56.959028 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.959008562 podStartE2EDuration="2.959008562s" podCreationTimestamp="2025-11-23 08:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:46:56.954317347 +0000 UTC m=+7000.651722126" watchObservedRunningTime="2025-11-23 08:46:56.959008562 +0000 UTC m=+7000.656413341"
Nov 23 08:46:57 crc kubenswrapper[5028]: I1123 08:46:57.645451 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:57 crc kubenswrapper[5028]: I1123 08:46:57.647443 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:57 crc kubenswrapper[5028]: I1123 08:46:57.702047 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:57 crc kubenswrapper[5028]: I1123 08:46:57.984203 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:46:58 crc kubenswrapper[5028]: I1123 08:46:58.874127 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phrv2"]
Nov 23 08:46:59 crc kubenswrapper[5028]: I1123 08:46:59.958754 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phrv2" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="registry-server" containerID="cri-o://66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d" gracePeriod=2
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.441700 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.497156 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-utilities\") pod \"88beafde-d053-4862-83df-c20db8d6dfd4\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") "
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.497276 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-catalog-content\") pod \"88beafde-d053-4862-83df-c20db8d6dfd4\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") "
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.497317 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6672d\" (UniqueName: \"kubernetes.io/projected/88beafde-d053-4862-83df-c20db8d6dfd4-kube-api-access-6672d\") pod \"88beafde-d053-4862-83df-c20db8d6dfd4\" (UID: \"88beafde-d053-4862-83df-c20db8d6dfd4\") "
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.498505 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-utilities" (OuterVolumeSpecName: "utilities") pod "88beafde-d053-4862-83df-c20db8d6dfd4" (UID: "88beafde-d053-4862-83df-c20db8d6dfd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.504305 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88beafde-d053-4862-83df-c20db8d6dfd4-kube-api-access-6672d" (OuterVolumeSpecName: "kube-api-access-6672d") pod "88beafde-d053-4862-83df-c20db8d6dfd4" (UID: "88beafde-d053-4862-83df-c20db8d6dfd4"). InnerVolumeSpecName "kube-api-access-6672d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.561506 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88beafde-d053-4862-83df-c20db8d6dfd4" (UID: "88beafde-d053-4862-83df-c20db8d6dfd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.599122 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.599165 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88beafde-d053-4862-83df-c20db8d6dfd4-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.599178 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6672d\" (UniqueName: \"kubernetes.io/projected/88beafde-d053-4862-83df-c20db8d6dfd4-kube-api-access-6672d\") on node \"crc\" DevicePath \"\""
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.972995 5028 generic.go:334] "Generic (PLEG): container finished" podID="88beafde-d053-4862-83df-c20db8d6dfd4" containerID="66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d" exitCode=0
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.973086 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phrv2" event={"ID":"88beafde-d053-4862-83df-c20db8d6dfd4","Type":"ContainerDied","Data":"66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d"}
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.973155 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phrv2" event={"ID":"88beafde-d053-4862-83df-c20db8d6dfd4","Type":"ContainerDied","Data":"12c8e2913f7342ae83e422d1efbdea65fb0ab27465935c224857e46a505da56a"}
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.973179 5028 scope.go:117] "RemoveContainer" containerID="66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d"
Nov 23 08:47:00 crc kubenswrapper[5028]: I1123 08:47:00.973922 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phrv2"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.009536 5028 scope.go:117] "RemoveContainer" containerID="8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.024252 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phrv2"]
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.031484 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phrv2"]
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.034799 5028 scope.go:117] "RemoveContainer" containerID="795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.067782 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" path="/var/lib/kubelet/pods/88beafde-d053-4862-83df-c20db8d6dfd4/volumes"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.072113 5028 scope.go:117] "RemoveContainer" containerID="66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d"
Nov 23 08:47:01 crc kubenswrapper[5028]: E1123 08:47:01.072546 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d\": container with ID starting with 66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d not found: ID does not exist" containerID="66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.072578 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d"} err="failed to get container status \"66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d\": rpc error: code = NotFound desc = could not find container \"66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d\": container with ID starting with 66df4f1885cb6e1f1ff76edf0f550b6a91ac8f5e71833749afe0e6c134b92e2d not found: ID does not exist"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.072602 5028 scope.go:117] "RemoveContainer" containerID="8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b"
Nov 23 08:47:01 crc kubenswrapper[5028]: E1123 08:47:01.073219 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b\": container with ID starting with 8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b not found: ID does not exist" containerID="8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.073240 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b"} err="failed to get container status \"8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b\": rpc error: code = NotFound desc = could not find container \"8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b\": container with ID starting with 8c601cd8dc0c789391b91f3efd74431ac718a8c174f6c4168c7e7c214079512b not found: ID does not exist"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.073253 5028 scope.go:117] "RemoveContainer" containerID="795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637"
Nov 23 08:47:01 crc kubenswrapper[5028]: E1123 08:47:01.073637 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637\": container with ID starting with 795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637 not found: ID does not exist" containerID="795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637"
Nov 23 08:47:01 crc kubenswrapper[5028]: I1123 08:47:01.073705 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637"} err="failed to get container status \"795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637\": rpc error: code = NotFound desc = could not find container \"795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637\": container with ID starting with 795e1e2d73942040a6441732b73549ef6b36e3754ddf85d5c969f4d907690637 not found: ID does not exist"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.400327 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.902872 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-72g9n"]
Nov 23 08:47:05 crc kubenswrapper[5028]: E1123 08:47:05.903441 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="extract-utilities"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.903481 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="extract-utilities"
Nov 23 08:47:05 crc kubenswrapper[5028]: E1123 08:47:05.903496 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="extract-content"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.903505 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="extract-content"
Nov 23 08:47:05 crc kubenswrapper[5028]: E1123 08:47:05.903519 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="registry-server"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.903528 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="registry-server"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.903790 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="88beafde-d053-4862-83df-c20db8d6dfd4" containerName="registry-server"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.904710 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.908553 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.908682 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.912519 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-72g9n"]
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.961224 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-scripts\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.961800 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.961863 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-config-data\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:05 crc kubenswrapper[5028]: I1123 08:47:05.961990 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pgz\" (UniqueName: \"kubernetes.io/projected/04d789bd-7151-47af-b379-89152fb07d3d-kube-api-access-s2pgz\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.072262 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-scripts\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.072343 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.072373 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-config-data\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.072432 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pgz\" (UniqueName: \"kubernetes.io/projected/04d789bd-7151-47af-b379-89152fb07d3d-kube-api-access-s2pgz\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.073050 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.074694 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.081050 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-config-data\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.084399 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.098534 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.106329 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.122651 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-scripts\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.175714 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pgz\" (UniqueName: \"kubernetes.io/projected/04d789bd-7151-47af-b379-89152fb07d3d-kube-api-access-s2pgz\") pod \"nova-cell0-cell-mapping-72g9n\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.184691 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc902f5-b148-497d-9f9d-d15987f4e2bb-logs\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.184771 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.185079 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-config-data\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.198518 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp47\" (UniqueName: \"kubernetes.io/projected/bbc902f5-b148-497d-9f9d-d15987f4e2bb-kube-api-access-qvp47\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.227691 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-72g9n"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.254525 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.256905 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.262636 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.284995 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.300245 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp47\" (UniqueName: \"kubernetes.io/projected/bbc902f5-b148-497d-9f9d-d15987f4e2bb-kube-api-access-qvp47\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.300318 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc902f5-b148-497d-9f9d-d15987f4e2bb-logs\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.300341 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.300376 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-config-data\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.309090 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc902f5-b148-497d-9f9d-d15987f4e2bb-logs\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.312364 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.313881 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.317903 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.340814 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.341704 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-config-data\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.363573 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp47\" (UniqueName: \"kubernetes.io/projected/bbc902f5-b148-497d-9f9d-d15987f4e2bb-kube-api-access-qvp47\") pod \"nova-api-0\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.369056 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.384367 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-697f8b7d5c-v5mpz"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.401981 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.402028 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lvzl\" (UniqueName: \"kubernetes.io/projected/3398c486-6d5f-49ac-8680-7e8e828665bd-kube-api-access-8lvzl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.402351 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.403422 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-config-data\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.403612 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.403765 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2884315-d4ef-4201-8391-b3954ea2c686-logs\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.404008 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjclb\" (UniqueName: \"kubernetes.io/projected/d2884315-d4ef-4201-8391-b3954ea2c686-kube-api-access-rjclb\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.416793 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.424769 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697f8b7d5c-v5mpz"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.466894 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.468320 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.475828 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.495868 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.512845 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.512910 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-dns-svc\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.512940 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-sb\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513004 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-config-data\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513085 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513116 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2884315-d4ef-4201-8391-b3954ea2c686-logs\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513139 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-nb\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513187 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjclb\" (UniqueName: \"kubernetes.io/projected/d2884315-d4ef-4201-8391-b3954ea2c686-kube-api-access-rjclb\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513250 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513280 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lvzl\" (UniqueName: \"kubernetes.io/projected/3398c486-6d5f-49ac-8680-7e8e828665bd-kube-api-access-8lvzl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513300 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-config\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.513324 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbq6t\" (UniqueName: \"kubernetes.io/projected/122a1da8-80b6-47bf-baf6-1cd7010c8bab-kube-api-access-rbq6t\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.514526 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2884315-d4ef-4201-8391-b3954ea2c686-logs\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.524771 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-config-data\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.525389 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.525428 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.526087 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.532365 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.543562 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lvzl\" (UniqueName: \"kubernetes.io/projected/3398c486-6d5f-49ac-8680-7e8e828665bd-kube-api-access-8lvzl\") pod \"nova-cell1-novncproxy-0\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.547586 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjclb\" (UniqueName: \"kubernetes.io/projected/d2884315-d4ef-4201-8391-b3954ea2c686-kube-api-access-rjclb\") pod \"nova-metadata-0\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.615497 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6k8l\" (UniqueName: \"kubernetes.io/projected/6f56a33c-58f3-4870-83d2-b98624f85285-kube-api-access-r6k8l\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.615645 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.615714 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-dns-svc\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.615819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-sb\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.616011 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-nb\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.616119 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-config\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.616169 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-config-data\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.616196 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbq6t\" (UniqueName: \"kubernetes.io/projected/122a1da8-80b6-47bf-baf6-1cd7010c8bab-kube-api-access-rbq6t\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.616973 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-dns-svc\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.616973 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-sb\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.617088 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-nb\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.617257 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-config\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.636098 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbq6t\" (UniqueName: \"kubernetes.io/projected/122a1da8-80b6-47bf-baf6-1cd7010c8bab-kube-api-access-rbq6t\") pod \"dnsmasq-dns-697f8b7d5c-v5mpz\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.716178 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.717217 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-config-data\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.717284 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6k8l\" (UniqueName: \"kubernetes.io/projected/6f56a33c-58f3-4870-83d2-b98624f85285-kube-api-access-r6k8l\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.717347 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.721309 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.724365 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-config-data\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.728355 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.741597 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6k8l\" (UniqueName: \"kubernetes.io/projected/6f56a33c-58f3-4870-83d2-b98624f85285-kube-api-access-r6k8l\") pod \"nova-scheduler-0\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " pod="openstack/nova-scheduler-0"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.750928 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz"
Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.799687 5028 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:06 crc kubenswrapper[5028]: I1123 08:47:06.944100 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-72g9n"] Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.068426 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:47:07 crc kubenswrapper[5028]: E1123 08:47:07.068695 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.086061 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-72g9n" event={"ID":"04d789bd-7151-47af-b379-89152fb07d3d","Type":"ContainerStarted","Data":"e90458c4ebee06126a59910bcf53944089fe1c0ee505be5e4d5eb26bc399e605"} Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.102549 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.217852 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vmc6m"] Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.219439 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.224806 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.225745 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.228416 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vmc6m"] Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.259346 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.327410 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.330601 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-scripts\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.330703 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.330741 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x5z9l\" (UniqueName: \"kubernetes.io/projected/956c5566-a9a2-4f35-a210-39618d9c332d-kube-api-access-x5z9l\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.330761 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-config-data\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: W1123 08:47:07.334396 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2884315_d4ef_4201_8391_b3954ea2c686.slice/crio-5f2bb82e068f30678c095c636ab812affef513d39e75cc4ec2dddbf8146c78f5 WatchSource:0}: Error finding container 5f2bb82e068f30678c095c636ab812affef513d39e75cc4ec2dddbf8146c78f5: Status 404 returned error can't find the container with id 5f2bb82e068f30678c095c636ab812affef513d39e75cc4ec2dddbf8146c78f5 Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.433262 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-scripts\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.433852 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.433895 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5z9l\" (UniqueName: \"kubernetes.io/projected/956c5566-a9a2-4f35-a210-39618d9c332d-kube-api-access-x5z9l\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.433915 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-config-data\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.440282 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-scripts\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.449239 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-config-data\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " 
pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.450705 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.465711 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5z9l\" (UniqueName: \"kubernetes.io/projected/956c5566-a9a2-4f35-a210-39618d9c332d-kube-api-access-x5z9l\") pod \"nova-cell1-conductor-db-sync-vmc6m\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:07 crc kubenswrapper[5028]: W1123 08:47:07.477512 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f56a33c_58f3_4870_83d2_b98624f85285.slice/crio-7a91163a2e8f2eef91b39b3041340d232ec34e75f23c2871ebd2471ee036d7f9 WatchSource:0}: Error finding container 7a91163a2e8f2eef91b39b3041340d232ec34e75f23c2871ebd2471ee036d7f9: Status 404 returned error can't find the container with id 7a91163a2e8f2eef91b39b3041340d232ec34e75f23c2871ebd2471ee036d7f9 Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.479637 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-697f8b7d5c-v5mpz"] Nov 23 08:47:07 crc kubenswrapper[5028]: W1123 08:47:07.480602 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod122a1da8_80b6_47bf_baf6_1cd7010c8bab.slice/crio-e9eaf234780b974b0d08b8e0ad26ac4a9de332817333d7feecfbf1e14ba58181 WatchSource:0}: Error finding container e9eaf234780b974b0d08b8e0ad26ac4a9de332817333d7feecfbf1e14ba58181: Status 404 returned error can't find the container with id e9eaf234780b974b0d08b8e0ad26ac4a9de332817333d7feecfbf1e14ba58181 Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.498510 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:07 crc kubenswrapper[5028]: I1123 08:47:07.549974 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.078577 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vmc6m"] Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.139889 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2884315-d4ef-4201-8391-b3954ea2c686","Type":"ContainerStarted","Data":"5f2bb82e068f30678c095c636ab812affef513d39e75cc4ec2dddbf8146c78f5"} Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.143059 5028 generic.go:334] "Generic (PLEG): container finished" podID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerID="6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701" exitCode=0 Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.143175 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" event={"ID":"122a1da8-80b6-47bf-baf6-1cd7010c8bab","Type":"ContainerDied","Data":"6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701"} Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.143297 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" event={"ID":"122a1da8-80b6-47bf-baf6-1cd7010c8bab","Type":"ContainerStarted","Data":"e9eaf234780b974b0d08b8e0ad26ac4a9de332817333d7feecfbf1e14ba58181"} Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.149740 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f56a33c-58f3-4870-83d2-b98624f85285","Type":"ContainerStarted","Data":"7a91163a2e8f2eef91b39b3041340d232ec34e75f23c2871ebd2471ee036d7f9"} Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.153258 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3398c486-6d5f-49ac-8680-7e8e828665bd","Type":"ContainerStarted","Data":"bb09e530b831528aca78881869973d0a9b5ff56cbe09d2c900aed19501e4cfce"} Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.155192 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-72g9n" event={"ID":"04d789bd-7151-47af-b379-89152fb07d3d","Type":"ContainerStarted","Data":"4d4c04555c3dc7ff80a818f6dbdd0320e99df63b483544d45d14a2ac017ea56a"} Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.161430 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbc902f5-b148-497d-9f9d-d15987f4e2bb","Type":"ContainerStarted","Data":"641ae134d356c3a060148a00e593785ff009e442131d7e563a39ec11a9116cad"} Nov 23 08:47:08 crc kubenswrapper[5028]: I1123 08:47:08.225742 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-72g9n" podStartSLOduration=3.225717373 podStartE2EDuration="3.225717373s" podCreationTimestamp="2025-11-23 08:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:08.188723021 +0000 UTC m=+7011.886127800" watchObservedRunningTime="2025-11-23 08:47:08.225717373 +0000 UTC m=+7011.923122152" Nov 23 08:47:09 crc kubenswrapper[5028]: W1123 08:47:09.348332 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod956c5566_a9a2_4f35_a210_39618d9c332d.slice/crio-7dfa2cfd6d065ed47221e597a0385f1c61961f11a7ae014143a8ff6ba7061571 WatchSource:0}: 
Error finding container 7dfa2cfd6d065ed47221e597a0385f1c61961f11a7ae014143a8ff6ba7061571: Status 404 returned error can't find the container with id 7dfa2cfd6d065ed47221e597a0385f1c61961f11a7ae014143a8ff6ba7061571 Nov 23 08:47:10 crc kubenswrapper[5028]: I1123 08:47:10.214443 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" event={"ID":"122a1da8-80b6-47bf-baf6-1cd7010c8bab","Type":"ContainerStarted","Data":"5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad"} Nov 23 08:47:10 crc kubenswrapper[5028]: I1123 08:47:10.215856 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" Nov 23 08:47:10 crc kubenswrapper[5028]: I1123 08:47:10.231448 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" event={"ID":"956c5566-a9a2-4f35-a210-39618d9c332d","Type":"ContainerStarted","Data":"7dfa2cfd6d065ed47221e597a0385f1c61961f11a7ae014143a8ff6ba7061571"} Nov 23 08:47:10 crc kubenswrapper[5028]: I1123 08:47:10.239853 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbc902f5-b148-497d-9f9d-d15987f4e2bb","Type":"ContainerStarted","Data":"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3"} Nov 23 08:47:10 crc kubenswrapper[5028]: I1123 08:47:10.242016 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" podStartSLOduration=4.241984487 podStartE2EDuration="4.241984487s" podCreationTimestamp="2025-11-23 08:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:10.239198678 +0000 UTC m=+7013.936603477" watchObservedRunningTime="2025-11-23 08:47:10.241984487 +0000 UTC m=+7013.939389276" Nov 23 08:47:10 crc kubenswrapper[5028]: I1123 08:47:10.256298 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2884315-d4ef-4201-8391-b3954ea2c686","Type":"ContainerStarted","Data":"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5"} Nov 23 08:47:10 crc kubenswrapper[5028]: I1123 08:47:10.265237 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" podStartSLOduration=3.265205929 podStartE2EDuration="3.265205929s" podCreationTimestamp="2025-11-23 08:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:10.258901143 +0000 UTC m=+7013.956305922" watchObservedRunningTime="2025-11-23 08:47:10.265205929 +0000 UTC m=+7013.962610708" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.275238 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2884315-d4ef-4201-8391-b3954ea2c686","Type":"ContainerStarted","Data":"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c"} Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.281300 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f56a33c-58f3-4870-83d2-b98624f85285","Type":"ContainerStarted","Data":"e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c"} Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.283648 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"3398c486-6d5f-49ac-8680-7e8e828665bd","Type":"ContainerStarted","Data":"81c1e9d06f0831130856219d79b415e16dccfaf2ddd76c004d3fdf9e396b23c6"} Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.286698 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" event={"ID":"956c5566-a9a2-4f35-a210-39618d9c332d","Type":"ContainerStarted","Data":"18ca84e14b8b10ed175cb21ca70cfbab33e88f34a813f3b2898e853ed8e54b67"} Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.296887 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbc902f5-b148-497d-9f9d-d15987f4e2bb","Type":"ContainerStarted","Data":"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54"} Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.322081 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.721360791 podStartE2EDuration="5.322057111s" podCreationTimestamp="2025-11-23 08:47:06 +0000 UTC" firstStartedPulling="2025-11-23 08:47:07.339604056 +0000 UTC m=+7011.037008835" lastFinishedPulling="2025-11-23 08:47:09.940300376 +0000 UTC m=+7013.637705155" observedRunningTime="2025-11-23 08:47:11.308739403 +0000 UTC m=+7015.006144192" watchObservedRunningTime="2025-11-23 08:47:11.322057111 +0000 UTC m=+7015.019461890" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.342501 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.539905602 podStartE2EDuration="5.342477594s" podCreationTimestamp="2025-11-23 08:47:06 +0000 UTC" firstStartedPulling="2025-11-23 08:47:07.138244366 +0000 UTC m=+7010.835649145" lastFinishedPulling="2025-11-23 08:47:09.940816358 +0000 UTC m=+7013.638221137" observedRunningTime="2025-11-23 08:47:11.330249793 +0000 UTC m=+7015.027654572" watchObservedRunningTime="2025-11-23 08:47:11.342477594 +0000 UTC m=+7015.039882373" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.358751 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.918857117 podStartE2EDuration="5.358723615s" podCreationTimestamp="2025-11-23 08:47:06 +0000 UTC" firstStartedPulling="2025-11-23 08:47:07.48022698 +0000 UTC m=+7011.177631759" lastFinishedPulling="2025-11-23 08:47:09.920093478 +0000 UTC m=+7013.617498257" observedRunningTime="2025-11-23 08:47:11.352983343 +0000 UTC m=+7015.050388132" watchObservedRunningTime="2025-11-23 08:47:11.358723615 +0000 UTC m=+7015.056128414" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.380666 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.7080694640000003 podStartE2EDuration="5.380645064s" podCreationTimestamp="2025-11-23 08:47:06 +0000 UTC" firstStartedPulling="2025-11-23 08:47:07.268125125 +0000 UTC m=+7010.965529914" lastFinishedPulling="2025-11-23 08:47:09.940700735 +0000 UTC m=+7013.638105514" observedRunningTime="2025-11-23 08:47:11.37439085 +0000 UTC m=+7015.071795639" watchObservedRunningTime="2025-11-23 08:47:11.380645064 +0000 UTC m=+7015.078049843" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.717231 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.717606 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.729817 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:47:11 crc kubenswrapper[5028]: I1123 08:47:11.800101 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 08:47:13 crc kubenswrapper[5028]: I1123 08:47:13.320810 5028 generic.go:334] "Generic (PLEG): container finished" podID="956c5566-a9a2-4f35-a210-39618d9c332d" containerID="18ca84e14b8b10ed175cb21ca70cfbab33e88f34a813f3b2898e853ed8e54b67" exitCode=0 Nov 23 08:47:13 crc kubenswrapper[5028]: I1123 08:47:13.320901 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" event={"ID":"956c5566-a9a2-4f35-a210-39618d9c332d","Type":"ContainerDied","Data":"18ca84e14b8b10ed175cb21ca70cfbab33e88f34a813f3b2898e853ed8e54b67"} Nov 23 08:47:13 crc kubenswrapper[5028]: I1123 08:47:13.323555 5028 generic.go:334] "Generic (PLEG): container finished" podID="04d789bd-7151-47af-b379-89152fb07d3d" containerID="4d4c04555c3dc7ff80a818f6dbdd0320e99df63b483544d45d14a2ac017ea56a" exitCode=0 Nov 23 08:47:13 crc kubenswrapper[5028]: I1123 08:47:13.323732 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-72g9n" event={"ID":"04d789bd-7151-47af-b379-89152fb07d3d","Type":"ContainerDied","Data":"4d4c04555c3dc7ff80a818f6dbdd0320e99df63b483544d45d14a2ac017ea56a"} Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.814137 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.821578 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-72g9n" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.827199 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-combined-ca-bundle\") pod \"956c5566-a9a2-4f35-a210-39618d9c332d\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.827345 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-scripts\") pod \"956c5566-a9a2-4f35-a210-39618d9c332d\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.827466 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5z9l\" (UniqueName: \"kubernetes.io/projected/956c5566-a9a2-4f35-a210-39618d9c332d-kube-api-access-x5z9l\") pod \"956c5566-a9a2-4f35-a210-39618d9c332d\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.827552 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-config-data\") pod \"956c5566-a9a2-4f35-a210-39618d9c332d\" (UID: \"956c5566-a9a2-4f35-a210-39618d9c332d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.837140 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-scripts" (OuterVolumeSpecName: "scripts") pod "956c5566-a9a2-4f35-a210-39618d9c332d" (UID: "956c5566-a9a2-4f35-a210-39618d9c332d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.839160 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956c5566-a9a2-4f35-a210-39618d9c332d-kube-api-access-x5z9l" (OuterVolumeSpecName: "kube-api-access-x5z9l") pod "956c5566-a9a2-4f35-a210-39618d9c332d" (UID: "956c5566-a9a2-4f35-a210-39618d9c332d"). InnerVolumeSpecName "kube-api-access-x5z9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.861324 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "956c5566-a9a2-4f35-a210-39618d9c332d" (UID: "956c5566-a9a2-4f35-a210-39618d9c332d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.890090 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-config-data" (OuterVolumeSpecName: "config-data") pod "956c5566-a9a2-4f35-a210-39618d9c332d" (UID: "956c5566-a9a2-4f35-a210-39618d9c332d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.929327 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-combined-ca-bundle\") pod \"04d789bd-7151-47af-b379-89152fb07d3d\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.929972 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-config-data\") pod \"04d789bd-7151-47af-b379-89152fb07d3d\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.930077 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2pgz\" (UniqueName: \"kubernetes.io/projected/04d789bd-7151-47af-b379-89152fb07d3d-kube-api-access-s2pgz\") pod \"04d789bd-7151-47af-b379-89152fb07d3d\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.930415 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-scripts\") pod \"04d789bd-7151-47af-b379-89152fb07d3d\" (UID: \"04d789bd-7151-47af-b379-89152fb07d3d\") " Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.930995 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.931023 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.931034 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5z9l\" (UniqueName: \"kubernetes.io/projected/956c5566-a9a2-4f35-a210-39618d9c332d-kube-api-access-x5z9l\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.931048 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/956c5566-a9a2-4f35-a210-39618d9c332d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.933044 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-scripts" (OuterVolumeSpecName: "scripts") pod "04d789bd-7151-47af-b379-89152fb07d3d" (UID: "04d789bd-7151-47af-b379-89152fb07d3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.934218 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d789bd-7151-47af-b379-89152fb07d3d-kube-api-access-s2pgz" (OuterVolumeSpecName: "kube-api-access-s2pgz") pod "04d789bd-7151-47af-b379-89152fb07d3d" (UID: "04d789bd-7151-47af-b379-89152fb07d3d"). InnerVolumeSpecName "kube-api-access-s2pgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.957355 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04d789bd-7151-47af-b379-89152fb07d3d" (UID: "04d789bd-7151-47af-b379-89152fb07d3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:14 crc kubenswrapper[5028]: I1123 08:47:14.965196 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-config-data" (OuterVolumeSpecName: "config-data") pod "04d789bd-7151-47af-b379-89152fb07d3d" (UID: "04d789bd-7151-47af-b379-89152fb07d3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.032608 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.032665 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.032681 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d789bd-7151-47af-b379-89152fb07d3d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.032694 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2pgz\" (UniqueName: \"kubernetes.io/projected/04d789bd-7151-47af-b379-89152fb07d3d-kube-api-access-s2pgz\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.347490 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.347531 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vmc6m" event={"ID":"956c5566-a9a2-4f35-a210-39618d9c332d","Type":"ContainerDied","Data":"7dfa2cfd6d065ed47221e597a0385f1c61961f11a7ae014143a8ff6ba7061571"} Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.347607 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dfa2cfd6d065ed47221e597a0385f1c61961f11a7ae014143a8ff6ba7061571" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.349633 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-72g9n" event={"ID":"04d789bd-7151-47af-b379-89152fb07d3d","Type":"ContainerDied","Data":"e90458c4ebee06126a59910bcf53944089fe1c0ee505be5e4d5eb26bc399e605"} Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.349683 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90458c4ebee06126a59910bcf53944089fe1c0ee505be5e4d5eb26bc399e605" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.349658 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-72g9n" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.433105 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:47:15 crc kubenswrapper[5028]: E1123 08:47:15.433494 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956c5566-a9a2-4f35-a210-39618d9c332d" containerName="nova-cell1-conductor-db-sync" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.433513 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="956c5566-a9a2-4f35-a210-39618d9c332d" containerName="nova-cell1-conductor-db-sync" Nov 23 08:47:15 crc kubenswrapper[5028]: E1123 08:47:15.433526 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d789bd-7151-47af-b379-89152fb07d3d" containerName="nova-manage" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.433533 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d789bd-7151-47af-b379-89152fb07d3d" containerName="nova-manage" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.433702 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d789bd-7151-47af-b379-89152fb07d3d" containerName="nova-manage" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.433731 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="956c5566-a9a2-4f35-a210-39618d9c332d" containerName="nova-cell1-conductor-db-sync" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.434460 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.459017 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.465048 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.554927 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.555468 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.555672 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nxn\" (UniqueName: \"kubernetes.io/projected/500cdf4f-8422-4fe1-942e-6db6cbcbed60-kube-api-access-q8nxn\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.594736 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.595312 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-api" 
containerID="cri-o://b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54" gracePeriod=30 Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.595109 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-log" containerID="cri-o://76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3" gracePeriod=30 Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.615618 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.616176 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6f56a33c-58f3-4870-83d2-b98624f85285" containerName="nova-scheduler-scheduler" containerID="cri-o://e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c" gracePeriod=30 Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.625184 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.625491 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-log" containerID="cri-o://3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5" gracePeriod=30 Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.625685 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-metadata" containerID="cri-o://f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c" gracePeriod=30 Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.659181 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.659301 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.659370 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8nxn\" (UniqueName: \"kubernetes.io/projected/500cdf4f-8422-4fe1-942e-6db6cbcbed60-kube-api-access-q8nxn\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.665548 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.667798 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.692234 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8nxn\" (UniqueName: \"kubernetes.io/projected/500cdf4f-8422-4fe1-942e-6db6cbcbed60-kube-api-access-q8nxn\") pod \"nova-cell1-conductor-0\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: I1123 08:47:15.772341 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:15 crc kubenswrapper[5028]: E1123 08:47:15.799599 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc902f5_b148_497d_9f9d_d15987f4e2bb.slice/crio-76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbc902f5_b148_497d_9f9d_d15987f4e2bb.slice/crio-conmon-76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2884315_d4ef_4201_8391_b3954ea2c686.slice/crio-3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5.scope\": RecentStats: unable to find data in memory cache]" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.230780 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.270524 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-config-data\") pod \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.271190 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc902f5-b148-497d-9f9d-d15987f4e2bb-logs\") pod \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.271238 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-combined-ca-bundle\") pod \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.271273 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvp47\" (UniqueName: \"kubernetes.io/projected/bbc902f5-b148-497d-9f9d-d15987f4e2bb-kube-api-access-qvp47\") pod \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\" (UID: \"bbc902f5-b148-497d-9f9d-d15987f4e2bb\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.272318 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbc902f5-b148-497d-9f9d-d15987f4e2bb-logs" (OuterVolumeSpecName: "logs") pod "bbc902f5-b148-497d-9f9d-d15987f4e2bb" (UID: "bbc902f5-b148-497d-9f9d-d15987f4e2bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.275528 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.279516 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc902f5-b148-497d-9f9d-d15987f4e2bb-kube-api-access-qvp47" (OuterVolumeSpecName: "kube-api-access-qvp47") pod "bbc902f5-b148-497d-9f9d-d15987f4e2bb" (UID: "bbc902f5-b148-497d-9f9d-d15987f4e2bb"). InnerVolumeSpecName "kube-api-access-qvp47". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.306698 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-config-data" (OuterVolumeSpecName: "config-data") pod "bbc902f5-b148-497d-9f9d-d15987f4e2bb" (UID: "bbc902f5-b148-497d-9f9d-d15987f4e2bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.311352 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbc902f5-b148-497d-9f9d-d15987f4e2bb" (UID: "bbc902f5-b148-497d-9f9d-d15987f4e2bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.362529 5028 generic.go:334] "Generic (PLEG): container finished" podID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerID="b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54" exitCode=0 Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.362576 5028 generic.go:334] "Generic (PLEG): container finished" podID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerID="76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3" exitCode=143 Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.362648 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbc902f5-b148-497d-9f9d-d15987f4e2bb","Type":"ContainerDied","Data":"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54"} Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.362691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbc902f5-b148-497d-9f9d-d15987f4e2bb","Type":"ContainerDied","Data":"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3"} Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.362705 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbc902f5-b148-497d-9f9d-d15987f4e2bb","Type":"ContainerDied","Data":"641ae134d356c3a060148a00e593785ff009e442131d7e563a39ec11a9116cad"} Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.362701 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.362728 5028 scope.go:117] "RemoveContainer" containerID="b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.373433 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2884315-d4ef-4201-8391-b3954ea2c686-logs\") pod \"d2884315-d4ef-4201-8391-b3954ea2c686\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.373572 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-combined-ca-bundle\") pod \"d2884315-d4ef-4201-8391-b3954ea2c686\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.373642 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjclb\" (UniqueName: \"kubernetes.io/projected/d2884315-d4ef-4201-8391-b3954ea2c686-kube-api-access-rjclb\") pod \"d2884315-d4ef-4201-8391-b3954ea2c686\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.373704 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-config-data\") pod \"d2884315-d4ef-4201-8391-b3954ea2c686\" (UID: \"d2884315-d4ef-4201-8391-b3954ea2c686\") " Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.374326 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.374356 5028 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbc902f5-b148-497d-9f9d-d15987f4e2bb-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.374369 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc902f5-b148-497d-9f9d-d15987f4e2bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.374386 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvp47\" (UniqueName: \"kubernetes.io/projected/bbc902f5-b148-497d-9f9d-d15987f4e2bb-kube-api-access-qvp47\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.381366 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2884315-d4ef-4201-8391-b3954ea2c686-logs" (OuterVolumeSpecName: "logs") pod "d2884315-d4ef-4201-8391-b3954ea2c686" (UID: "d2884315-d4ef-4201-8391-b3954ea2c686"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.401404 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2884315-d4ef-4201-8391-b3954ea2c686-kube-api-access-rjclb" (OuterVolumeSpecName: "kube-api-access-rjclb") pod "d2884315-d4ef-4201-8391-b3954ea2c686" (UID: "d2884315-d4ef-4201-8391-b3954ea2c686"). InnerVolumeSpecName "kube-api-access-rjclb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.407246 5028 generic.go:334] "Generic (PLEG): container finished" podID="d2884315-d4ef-4201-8391-b3954ea2c686" containerID="f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c" exitCode=0 Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.407453 5028 generic.go:334] "Generic (PLEG): container finished" podID="d2884315-d4ef-4201-8391-b3954ea2c686" containerID="3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5" exitCode=143 Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.407582 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2884315-d4ef-4201-8391-b3954ea2c686","Type":"ContainerDied","Data":"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c"} Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.407714 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2884315-d4ef-4201-8391-b3954ea2c686","Type":"ContainerDied","Data":"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5"} Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.407813 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d2884315-d4ef-4201-8391-b3954ea2c686","Type":"ContainerDied","Data":"5f2bb82e068f30678c095c636ab812affef513d39e75cc4ec2dddbf8146c78f5"} Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.407638 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.417750 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2884315-d4ef-4201-8391-b3954ea2c686" (UID: "d2884315-d4ef-4201-8391-b3954ea2c686"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.426283 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-config-data" (OuterVolumeSpecName: "config-data") pod "d2884315-d4ef-4201-8391-b3954ea2c686" (UID: "d2884315-d4ef-4201-8391-b3954ea2c686"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.430459 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.449032 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.449356 5028 scope.go:117] "RemoveContainer" containerID="76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.458500 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.459142 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-log" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459209 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-log" Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.459241 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-log" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459250 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-log" Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.459263 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-metadata" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459271 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-metadata" Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.459284 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-api" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459292 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-api" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459563 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-api" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459580 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" containerName="nova-api-log" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459595 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-metadata" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.459617 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" containerName="nova-metadata-log" Nov 23 
08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.475850 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.476029 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.476248 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.479563 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.480919 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.480983 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-config-data\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.481097 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5vw\" (UniqueName: \"kubernetes.io/projected/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-kube-api-access-4k5vw\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.481140 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-logs\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.482092 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2884315-d4ef-4201-8391-b3954ea2c686-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.482122 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.482137 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjclb\" (UniqueName: \"kubernetes.io/projected/d2884315-d4ef-4201-8391-b3954ea2c686-kube-api-access-rjclb\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.482151 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2884315-d4ef-4201-8391-b3954ea2c686-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.493782 5028 scope.go:117] "RemoveContainer" containerID="b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54" Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.494775 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54\": container with ID starting with b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54 not found: ID does not exist" containerID="b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.494848 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54"} err="failed to get container status \"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54\": rpc error: code = NotFound desc = could not find container \"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54\": container with ID starting with b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54 not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.494885 5028 scope.go:117] "RemoveContainer" containerID="76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3" Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.498890 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3\": container with ID starting with 76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3 not found: ID does not exist" containerID="76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.498922 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3"} err="failed to get container status \"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3\": rpc error: code = NotFound desc = could not find container \"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3\": container with ID starting with 76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3 not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.498956 5028 scope.go:117] "RemoveContainer" containerID="b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.499689 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54"} err="failed to get container status \"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54\": rpc error: code = NotFound desc = could not find container \"b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54\": container with ID starting with b74e2fed6a039923050f56cd8242f4aa612a71e4a184984d7e0b6b20ea55ef54 not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.499780 5028 scope.go:117] "RemoveContainer" containerID="76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.500781 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3"} err="failed to get container status \"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3\": rpc error: code = NotFound desc = could not find container \"76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3\": container with ID starting with 
76f749adcd77f96bab266e01a351d3d157b566e2675ff24df09d5ccac389acf3 not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.500835 5028 scope.go:117] "RemoveContainer" containerID="f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.526141 5028 scope.go:117] "RemoveContainer" containerID="3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.558077 5028 scope.go:117] "RemoveContainer" containerID="f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c" Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.558579 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c\": container with ID starting with f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c not found: ID does not exist" containerID="f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.558629 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c"} err="failed to get container status \"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c\": rpc error: code = NotFound desc = could not find container \"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c\": container with ID starting with f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.558666 5028 scope.go:117] "RemoveContainer" containerID="3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5" Nov 23 08:47:16 crc kubenswrapper[5028]: E1123 08:47:16.559247 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5\": container with ID starting with 3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5 not found: ID does not exist" containerID="3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.559302 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5"} err="failed to get container status \"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5\": rpc error: code = NotFound desc = could not find container \"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5\": container with ID starting with 3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5 not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.559342 5028 scope.go:117] "RemoveContainer" containerID="f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.559755 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c"} err="failed to get container status \"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c\": rpc error: code = NotFound desc = could not find container 
\"f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c\": container with ID starting with f60044877b0bbafed9917cf02ee5074c249fa5a98fc5b4d575e969835293ad6c not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.559799 5028 scope.go:117] "RemoveContainer" containerID="3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.560361 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5"} err="failed to get container status \"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5\": rpc error: code = NotFound desc = could not find container \"3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5\": container with ID starting with 3c9c71daf9e5df22f4dfdddbdbdcd0d0b25aee7d108c321b83492dc7a8ab2ad5 not found: ID does not exist" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.584781 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.584839 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-config-data\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.584979 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k5vw\" (UniqueName: \"kubernetes.io/projected/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-kube-api-access-4k5vw\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.585024 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-logs\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.585859 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-logs\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.589223 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.595778 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-config-data\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.608299 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5vw\" 
(UniqueName: \"kubernetes.io/projected/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-kube-api-access-4k5vw\") pod \"nova-api-0\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.729324 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.749137 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.751076 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.753087 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.770168 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.785319 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.787827 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.790591 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-config-data\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.790748 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgls4\" (UniqueName: \"kubernetes.io/projected/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-kube-api-access-hgls4\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.790894 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.790925 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-logs\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.798995 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.806681 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.842933 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.892587 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-config-data\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.892695 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgls4\" (UniqueName: \"kubernetes.io/projected/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-kube-api-access-hgls4\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.892796 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.892820 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-logs\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.893407 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-logs\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.904418 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df64d79bf-rn84r"] Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.904878 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" podUID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerName="dnsmasq-dns" containerID="cri-o://ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d" gracePeriod=10 Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.904882 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-config-data\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.906739 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " pod="openstack/nova-metadata-0" Nov 23 08:47:16 crc kubenswrapper[5028]: I1123 08:47:16.961531 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgls4\" (UniqueName: \"kubernetes.io/projected/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-kube-api-access-hgls4\") pod \"nova-metadata-0\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " 
pod="openstack/nova-metadata-0" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.073853 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc902f5-b148-497d-9f9d-d15987f4e2bb" path="/var/lib/kubelet/pods/bbc902f5-b148-497d-9f9d-d15987f4e2bb/volumes" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.075219 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2884315-d4ef-4201-8391-b3954ea2c686" path="/var/lib/kubelet/pods/d2884315-d4ef-4201-8391-b3954ea2c686/volumes" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.076302 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.316596 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.406838 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.422892 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-nb\") pod \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.423036 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-425hl\" (UniqueName: \"kubernetes.io/projected/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-kube-api-access-425hl\") pod \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.423072 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-config\") pod \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.423179 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-dns-svc\") pod \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.426167 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"500cdf4f-8422-4fe1-942e-6db6cbcbed60","Type":"ContainerStarted","Data":"f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1"} Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.426205 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"500cdf4f-8422-4fe1-942e-6db6cbcbed60","Type":"ContainerStarted","Data":"133acae7253b9ca5c9aea3a3019841165f54791ff90e974410d8b467ed47da04"} Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.426261 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.452209 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-kube-api-access-425hl" (OuterVolumeSpecName: "kube-api-access-425hl") pod "92edd648-c9d1-49df-8d5e-ab22f0e96a9b" (UID: 
"92edd648-c9d1-49df-8d5e-ab22f0e96a9b"). InnerVolumeSpecName "kube-api-access-425hl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.452967 5028 generic.go:334] "Generic (PLEG): container finished" podID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerID="ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d" exitCode=0 Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.453076 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" event={"ID":"92edd648-c9d1-49df-8d5e-ab22f0e96a9b","Type":"ContainerDied","Data":"ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d"} Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.453111 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" event={"ID":"92edd648-c9d1-49df-8d5e-ab22f0e96a9b","Type":"ContainerDied","Data":"c3e41d7daadf2d033c56c399f422e7ab2d9a446028d4741ef4eec229a8ed28e7"} Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.453127 5028 scope.go:117] "RemoveContainer" containerID="ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.454346 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-df64d79bf-rn84r" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.462366 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c","Type":"ContainerStarted","Data":"097d2cacf0c6a10d94185247542c96c659c8fc0f2435db2d81fcaabc0fd02d6a"} Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.488160 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.488130253 podStartE2EDuration="2.488130253s" podCreationTimestamp="2025-11-23 08:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:17.456569996 +0000 UTC m=+7021.153974775" watchObservedRunningTime="2025-11-23 08:47:17.488130253 +0000 UTC m=+7021.185535032" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.493119 5028 scope.go:117] "RemoveContainer" containerID="8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.497772 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.529170 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-sb\") pod \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\" (UID: \"92edd648-c9d1-49df-8d5e-ab22f0e96a9b\") " Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.535593 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-425hl\" (UniqueName: \"kubernetes.io/projected/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-kube-api-access-425hl\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.604819 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-config" (OuterVolumeSpecName: "config") pod "92edd648-c9d1-49df-8d5e-ab22f0e96a9b" (UID: 
"92edd648-c9d1-49df-8d5e-ab22f0e96a9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.620537 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92edd648-c9d1-49df-8d5e-ab22f0e96a9b" (UID: "92edd648-c9d1-49df-8d5e-ab22f0e96a9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.622576 5028 scope.go:117] "RemoveContainer" containerID="ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d" Nov 23 08:47:17 crc kubenswrapper[5028]: E1123 08:47:17.623305 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d\": container with ID starting with ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d not found: ID does not exist" containerID="ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.623358 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d"} err="failed to get container status \"ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d\": rpc error: code = NotFound desc = could not find container \"ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d\": container with ID starting with ec5193f35ab1a4abb5ed72e702c8848ed00859f983d9fdb3882e3e1107af7e2d not found: ID does not exist" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.623392 5028 scope.go:117] "RemoveContainer" containerID="8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c" Nov 23 08:47:17 crc kubenswrapper[5028]: E1123 08:47:17.624028 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c\": container with ID starting with 8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c not found: ID does not exist" containerID="8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.624169 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c"} err="failed to get container status \"8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c\": rpc error: code = NotFound desc = could not find container \"8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c\": container with ID starting with 8aee125c7bd1b9db7783682b2689e10923e1554ea8cd6a73688740c792e50d1c not found: ID does not exist" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.640029 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92edd648-c9d1-49df-8d5e-ab22f0e96a9b" (UID: "92edd648-c9d1-49df-8d5e-ab22f0e96a9b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.640435 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.640473 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.640487 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.663135 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.672799 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "92edd648-c9d1-49df-8d5e-ab22f0e96a9b" (UID: "92edd648-c9d1-49df-8d5e-ab22f0e96a9b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.741463 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/92edd648-c9d1-49df-8d5e-ab22f0e96a9b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.797180 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-df64d79bf-rn84r"] Nov 23 08:47:17 crc kubenswrapper[5028]: I1123 08:47:17.806540 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-df64d79bf-rn84r"] Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.009392 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.057526 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-combined-ca-bundle\") pod \"6f56a33c-58f3-4870-83d2-b98624f85285\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.057595 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6k8l\" (UniqueName: \"kubernetes.io/projected/6f56a33c-58f3-4870-83d2-b98624f85285-kube-api-access-r6k8l\") pod \"6f56a33c-58f3-4870-83d2-b98624f85285\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.058886 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-config-data\") pod \"6f56a33c-58f3-4870-83d2-b98624f85285\" (UID: \"6f56a33c-58f3-4870-83d2-b98624f85285\") " Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.063030 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f56a33c-58f3-4870-83d2-b98624f85285-kube-api-access-r6k8l" (OuterVolumeSpecName: "kube-api-access-r6k8l") pod "6f56a33c-58f3-4870-83d2-b98624f85285" (UID: "6f56a33c-58f3-4870-83d2-b98624f85285"). InnerVolumeSpecName "kube-api-access-r6k8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.097597 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-config-data" (OuterVolumeSpecName: "config-data") pod "6f56a33c-58f3-4870-83d2-b98624f85285" (UID: "6f56a33c-58f3-4870-83d2-b98624f85285"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.098529 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f56a33c-58f3-4870-83d2-b98624f85285" (UID: "6f56a33c-58f3-4870-83d2-b98624f85285"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.162114 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.162179 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f56a33c-58f3-4870-83d2-b98624f85285-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.162194 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6k8l\" (UniqueName: \"kubernetes.io/projected/6f56a33c-58f3-4870-83d2-b98624f85285-kube-api-access-r6k8l\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.472255 5028 generic.go:334] "Generic (PLEG): container finished" podID="6f56a33c-58f3-4870-83d2-b98624f85285" containerID="e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c" exitCode=0 Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.472469 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f56a33c-58f3-4870-83d2-b98624f85285","Type":"ContainerDied","Data":"e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c"} Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.472798 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f56a33c-58f3-4870-83d2-b98624f85285","Type":"ContainerDied","Data":"7a91163a2e8f2eef91b39b3041340d232ec34e75f23c2871ebd2471ee036d7f9"} Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.472821 5028 scope.go:117] "RemoveContainer" containerID="e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.472555 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.492601 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c","Type":"ContainerStarted","Data":"e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a"} Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.492657 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c","Type":"ContainerStarted","Data":"67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6"} Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.522707 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c08b19ff-b2cf-49f1-b414-8ce61d82f04e","Type":"ContainerStarted","Data":"e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22"} Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.522766 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c08b19ff-b2cf-49f1-b414-8ce61d82f04e","Type":"ContainerStarted","Data":"a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9"} Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.522784 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c08b19ff-b2cf-49f1-b414-8ce61d82f04e","Type":"ContainerStarted","Data":"bc7963536c75213b24dec069295b0e8ccef39798aa254f44866a035d7de92234"} Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.534561 5028 scope.go:117] "RemoveContainer" containerID="e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c" Nov 23 08:47:18 crc kubenswrapper[5028]: E1123 08:47:18.535158 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c\": container with ID starting with e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c not found: ID does not exist" containerID="e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.535212 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c"} err="failed to get container status \"e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c\": rpc error: code = NotFound desc = could not find container \"e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c\": container with ID starting with e7463f97de55565f44909571751eb9ee43a93748d40acff1a0b0f6bbe9e2de3c not found: ID does not exist" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.548120 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.559500 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.567109 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:18 crc kubenswrapper[5028]: E1123 08:47:18.567878 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f56a33c-58f3-4870-83d2-b98624f85285" containerName="nova-scheduler-scheduler" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.567910 5028 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6f56a33c-58f3-4870-83d2-b98624f85285" containerName="nova-scheduler-scheduler" Nov 23 08:47:18 crc kubenswrapper[5028]: E1123 08:47:18.567967 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerName="init" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.567980 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerName="init" Nov 23 08:47:18 crc kubenswrapper[5028]: E1123 08:47:18.568010 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerName="dnsmasq-dns" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.568020 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerName="dnsmasq-dns" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.568255 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" containerName="dnsmasq-dns" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.568289 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f56a33c-58f3-4870-83d2-b98624f85285" containerName="nova-scheduler-scheduler" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.579654 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.579618139 podStartE2EDuration="2.579618139s" podCreationTimestamp="2025-11-23 08:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:18.529782221 +0000 UTC m=+7022.227187000" watchObservedRunningTime="2025-11-23 08:47:18.579618139 +0000 UTC m=+7022.277022928" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.594833 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.602363 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.641002 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.641891 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.641869552 podStartE2EDuration="2.641869552s" podCreationTimestamp="2025-11-23 08:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:18.590552488 +0000 UTC m=+7022.287957267" watchObservedRunningTime="2025-11-23 08:47:18.641869552 +0000 UTC m=+7022.339274331" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.682552 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mvqn\" (UniqueName: \"kubernetes.io/projected/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-kube-api-access-4mvqn\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.682876 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-config-data\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.683081 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.785488 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mvqn\" (UniqueName: \"kubernetes.io/projected/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-kube-api-access-4mvqn\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.785650 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-config-data\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.785774 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.791525 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " 
pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.800188 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-config-data\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.809475 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mvqn\" (UniqueName: \"kubernetes.io/projected/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-kube-api-access-4mvqn\") pod \"nova-scheduler-0\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:18 crc kubenswrapper[5028]: I1123 08:47:18.930891 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:19 crc kubenswrapper[5028]: I1123 08:47:19.067662 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f56a33c-58f3-4870-83d2-b98624f85285" path="/var/lib/kubelet/pods/6f56a33c-58f3-4870-83d2-b98624f85285/volumes" Nov 23 08:47:19 crc kubenswrapper[5028]: I1123 08:47:19.068358 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92edd648-c9d1-49df-8d5e-ab22f0e96a9b" path="/var/lib/kubelet/pods/92edd648-c9d1-49df-8d5e-ab22f0e96a9b/volumes" Nov 23 08:47:19 crc kubenswrapper[5028]: I1123 08:47:19.465928 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:19 crc kubenswrapper[5028]: I1123 08:47:19.532097 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2","Type":"ContainerStarted","Data":"ea1d185173f521dddd414e223057f9c6e076e6dc292232c16461b1a8ab520277"} Nov 23 08:47:20 crc kubenswrapper[5028]: I1123 08:47:20.556940 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2","Type":"ContainerStarted","Data":"d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2"} Nov 23 08:47:20 crc kubenswrapper[5028]: I1123 08:47:20.633756 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.633729714 podStartE2EDuration="2.633729714s" podCreationTimestamp="2025-11-23 08:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:20.605809146 +0000 UTC m=+7024.303213965" watchObservedRunningTime="2025-11-23 08:47:20.633729714 +0000 UTC m=+7024.331134533" Nov 23 08:47:21 crc kubenswrapper[5028]: I1123 08:47:21.054462 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:47:21 crc kubenswrapper[5028]: E1123 08:47:21.054997 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:47:22 crc kubenswrapper[5028]: I1123 08:47:22.076660 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Nov 23 08:47:22 crc kubenswrapper[5028]: I1123 08:47:22.077074 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:47:23 crc kubenswrapper[5028]: I1123 08:47:23.931290 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 08:47:25 crc kubenswrapper[5028]: I1123 08:47:25.861375 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.347861 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jc7mh"] Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.350040 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.352265 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.352615 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.358495 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jc7mh"] Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.454412 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wkh\" (UniqueName: \"kubernetes.io/projected/0714cee8-f557-480e-b57a-badede4d39c5-kube-api-access-55wkh\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.454487 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-config-data\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.454580 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-scripts\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.454660 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.557124 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.557247 5028 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-55wkh\" (UniqueName: \"kubernetes.io/projected/0714cee8-f557-480e-b57a-badede4d39c5-kube-api-access-55wkh\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.557312 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-config-data\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.557453 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-scripts\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.565528 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.571594 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-config-data\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.578293 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55wkh\" (UniqueName: \"kubernetes.io/projected/0714cee8-f557-480e-b57a-badede4d39c5-kube-api-access-55wkh\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.583418 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-scripts\") pod \"nova-cell1-cell-mapping-jc7mh\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.684763 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.811245 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:47:26 crc kubenswrapper[5028]: I1123 08:47:26.811621 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.044864 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jc7mh"] Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.079753 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.079814 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.650846 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jc7mh" event={"ID":"0714cee8-f557-480e-b57a-badede4d39c5","Type":"ContainerStarted","Data":"da32e9b6731e66dd4dcbdd69d3f3ad9439794dfa945df0919fd4fa3f8e80fda2"} Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.651535 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jc7mh" event={"ID":"0714cee8-f557-480e-b57a-badede4d39c5","Type":"ContainerStarted","Data":"bbbda1ff914cc96df6ae30cf823c6a4b2debca646f32d7f6b3bd8438258bc9b4"} Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.682024 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jc7mh" podStartSLOduration=1.681999375 podStartE2EDuration="1.681999375s" podCreationTimestamp="2025-11-23 08:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:27.673654999 +0000 UTC m=+7031.371059778" watchObservedRunningTime="2025-11-23 08:47:27.681999375 +0000 UTC m=+7031.379404154" Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.893180 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.79:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:47:27 crc kubenswrapper[5028]: I1123 08:47:27.893180 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.79:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:47:28 crc kubenswrapper[5028]: I1123 08:47:28.161178 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.80:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:47:28 crc kubenswrapper[5028]: I1123 08:47:28.161192 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.80:8775/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Nov 23 08:47:28 crc kubenswrapper[5028]: I1123 08:47:28.931939 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 08:47:28 crc kubenswrapper[5028]: I1123 08:47:28.960668 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 08:47:29 crc kubenswrapper[5028]: I1123 08:47:29.726533 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 08:47:32 crc kubenswrapper[5028]: I1123 08:47:32.703472 5028 generic.go:334] "Generic (PLEG): container finished" podID="0714cee8-f557-480e-b57a-badede4d39c5" containerID="da32e9b6731e66dd4dcbdd69d3f3ad9439794dfa945df0919fd4fa3f8e80fda2" exitCode=0 Nov 23 08:47:32 crc kubenswrapper[5028]: I1123 08:47:32.703565 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jc7mh" event={"ID":"0714cee8-f557-480e-b57a-badede4d39c5","Type":"ContainerDied","Data":"da32e9b6731e66dd4dcbdd69d3f3ad9439794dfa945df0919fd4fa3f8e80fda2"} Nov 23 08:47:33 crc kubenswrapper[5028]: I1123 08:47:33.053565 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146" Nov 23 08:47:33 crc kubenswrapper[5028]: I1123 08:47:33.718201 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"a64b72f76a8fe768b7b1776afaa348b6635ec48477767013a700f559c89fe286"} Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.108605 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.158145 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-config-data\") pod \"0714cee8-f557-480e-b57a-badede4d39c5\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.158261 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-combined-ca-bundle\") pod \"0714cee8-f557-480e-b57a-badede4d39c5\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.158348 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-scripts\") pod \"0714cee8-f557-480e-b57a-badede4d39c5\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.158413 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55wkh\" (UniqueName: \"kubernetes.io/projected/0714cee8-f557-480e-b57a-badede4d39c5-kube-api-access-55wkh\") pod \"0714cee8-f557-480e-b57a-badede4d39c5\" (UID: \"0714cee8-f557-480e-b57a-badede4d39c5\") " Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.167006 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0714cee8-f557-480e-b57a-badede4d39c5-kube-api-access-55wkh" (OuterVolumeSpecName: "kube-api-access-55wkh") pod "0714cee8-f557-480e-b57a-badede4d39c5" (UID: 
"0714cee8-f557-480e-b57a-badede4d39c5"). InnerVolumeSpecName "kube-api-access-55wkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.167544 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-scripts" (OuterVolumeSpecName: "scripts") pod "0714cee8-f557-480e-b57a-badede4d39c5" (UID: "0714cee8-f557-480e-b57a-badede4d39c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.189119 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0714cee8-f557-480e-b57a-badede4d39c5" (UID: "0714cee8-f557-480e-b57a-badede4d39c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.193713 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-config-data" (OuterVolumeSpecName: "config-data") pod "0714cee8-f557-480e-b57a-badede4d39c5" (UID: "0714cee8-f557-480e-b57a-badede4d39c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.261643 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.262180 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.262271 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0714cee8-f557-480e-b57a-badede4d39c5-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.262365 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55wkh\" (UniqueName: \"kubernetes.io/projected/0714cee8-f557-480e-b57a-badede4d39c5-kube-api-access-55wkh\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.735539 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jc7mh" event={"ID":"0714cee8-f557-480e-b57a-badede4d39c5","Type":"ContainerDied","Data":"bbbda1ff914cc96df6ae30cf823c6a4b2debca646f32d7f6b3bd8438258bc9b4"} Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.735590 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbbda1ff914cc96df6ae30cf823c6a4b2debca646f32d7f6b3bd8438258bc9b4" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.735590 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jc7mh" Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.967392 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.975497 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-log" containerID="cri-o://67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6" gracePeriod=30 Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.976163 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-api" containerID="cri-o://e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a" gracePeriod=30 Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.989036 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.989439 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-metadata" containerID="cri-o://e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22" gracePeriod=30 Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.989641 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-log" containerID="cri-o://a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9" gracePeriod=30 Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.998528 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:34 crc kubenswrapper[5028]: I1123 08:47:34.998795 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" containerName="nova-scheduler-scheduler" containerID="cri-o://d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2" gracePeriod=30 Nov 23 08:47:35 crc kubenswrapper[5028]: I1123 08:47:35.747991 5028 generic.go:334] "Generic (PLEG): container finished" podID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerID="a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9" exitCode=143 Nov 23 08:47:35 crc kubenswrapper[5028]: I1123 08:47:35.748115 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c08b19ff-b2cf-49f1-b414-8ce61d82f04e","Type":"ContainerDied","Data":"a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9"} Nov 23 08:47:35 crc kubenswrapper[5028]: I1123 08:47:35.751661 5028 generic.go:334] "Generic (PLEG): container finished" podID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerID="67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6" exitCode=143 Nov 23 08:47:35 crc kubenswrapper[5028]: I1123 08:47:35.751695 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c","Type":"ContainerDied","Data":"67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6"} Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.593713 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.717943 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mvqn\" (UniqueName: \"kubernetes.io/projected/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-kube-api-access-4mvqn\") pod \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.718082 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-config-data\") pod \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.718129 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-combined-ca-bundle\") pod \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\" (UID: \"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2\") " Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.728241 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-kube-api-access-4mvqn" (OuterVolumeSpecName: "kube-api-access-4mvqn") pod "3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" (UID: "3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2"). InnerVolumeSpecName "kube-api-access-4mvqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.753200 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" (UID: "3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.756111 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-config-data" (OuterVolumeSpecName: "config-data") pod "3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" (UID: "3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.764599 5028 generic.go:334] "Generic (PLEG): container finished" podID="3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" containerID="d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2" exitCode=0 Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.764702 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2","Type":"ContainerDied","Data":"d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2"} Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.764796 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2","Type":"ContainerDied","Data":"ea1d185173f521dddd414e223057f9c6e076e6dc292232c16461b1a8ab520277"} Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.764822 5028 scope.go:117] "RemoveContainer" containerID="d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.764743 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.825018 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.825082 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.825098 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mvqn\" (UniqueName: \"kubernetes.io/projected/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2-kube-api-access-4mvqn\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.852240 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.853601 5028 scope.go:117] "RemoveContainer" containerID="d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2" Nov 23 08:47:36 crc kubenswrapper[5028]: E1123 08:47:36.854430 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2\": container with ID starting with d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2 not found: ID does not exist" containerID="d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.854479 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2"} err="failed to get container status \"d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2\": rpc error: code = NotFound desc = could not find container \"d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2\": container with ID starting with d95ea204e1f4522d49bb7b3d30ccf48f49f880cb665ff65e93f9929be5d94ef2 not found: ID does not exist" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.861702 5028 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.885717 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:36 crc kubenswrapper[5028]: E1123 08:47:36.886374 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0714cee8-f557-480e-b57a-badede4d39c5" containerName="nova-manage" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.886405 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0714cee8-f557-480e-b57a-badede4d39c5" containerName="nova-manage" Nov 23 08:47:36 crc kubenswrapper[5028]: E1123 08:47:36.886441 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" containerName="nova-scheduler-scheduler" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.886452 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" containerName="nova-scheduler-scheduler" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.886642 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" containerName="nova-scheduler-scheduler" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.886668 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0714cee8-f557-480e-b57a-badede4d39c5" containerName="nova-manage" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.887658 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.889572 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 08:47:36 crc kubenswrapper[5028]: I1123 08:47:36.896217 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.028724 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2bq\" (UniqueName: \"kubernetes.io/projected/d5b5f042-5da4-4b17-b14d-a0aedb34a160-kube-api-access-4r2bq\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.028767 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.028798 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-config-data\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.066201 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2" path="/var/lib/kubelet/pods/3dd1fb20-c7c3-42aa-93e2-b47d1b96c8a2/volumes" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.132716 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2bq\" (UniqueName: 
\"kubernetes.io/projected/d5b5f042-5da4-4b17-b14d-a0aedb34a160-kube-api-access-4r2bq\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.132803 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.132834 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-config-data\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.138567 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.140920 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-config-data\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.155387 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2bq\" (UniqueName: \"kubernetes.io/projected/d5b5f042-5da4-4b17-b14d-a0aedb34a160-kube-api-access-4r2bq\") pod \"nova-scheduler-0\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.216694 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.714218 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:47:37 crc kubenswrapper[5028]: I1123 08:47:37.781019 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b5f042-5da4-4b17-b14d-a0aedb34a160","Type":"ContainerStarted","Data":"346004100ea50a79628effd747594df26e1b6dc375b990f341315da52cff9785"} Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.685137 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.696313 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774143 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-config-data\") pod \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774364 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-combined-ca-bundle\") pod \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774415 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-logs\") pod \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774444 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgls4\" (UniqueName: \"kubernetes.io/projected/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-kube-api-access-hgls4\") pod \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774493 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-combined-ca-bundle\") pod \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774537 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k5vw\" (UniqueName: \"kubernetes.io/projected/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-kube-api-access-4k5vw\") pod \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774590 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-logs\") pod \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\" (UID: \"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.774635 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-config-data\") pod \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\" (UID: \"c08b19ff-b2cf-49f1-b414-8ce61d82f04e\") " Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.775580 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-logs" (OuterVolumeSpecName: "logs") pod "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" (UID: "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.775718 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-logs" (OuterVolumeSpecName: "logs") pod "c08b19ff-b2cf-49f1-b414-8ce61d82f04e" (UID: "c08b19ff-b2cf-49f1-b414-8ce61d82f04e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.782659 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-kube-api-access-4k5vw" (OuterVolumeSpecName: "kube-api-access-4k5vw") pod "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" (UID: "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c"). InnerVolumeSpecName "kube-api-access-4k5vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.784841 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-kube-api-access-hgls4" (OuterVolumeSpecName: "kube-api-access-hgls4") pod "c08b19ff-b2cf-49f1-b414-8ce61d82f04e" (UID: "c08b19ff-b2cf-49f1-b414-8ce61d82f04e"). InnerVolumeSpecName "kube-api-access-hgls4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.799069 5028 generic.go:334] "Generic (PLEG): container finished" podID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerID="e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a" exitCode=0 Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.799365 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.799148 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c","Type":"ContainerDied","Data":"e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a"} Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.799823 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c00f3f7-b2d2-45aa-9eef-c7a60aef731c","Type":"ContainerDied","Data":"097d2cacf0c6a10d94185247542c96c659c8fc0f2435db2d81fcaabc0fd02d6a"} Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.799895 5028 scope.go:117] "RemoveContainer" containerID="e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.802376 5028 generic.go:334] "Generic (PLEG): container finished" podID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerID="e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22" exitCode=0 Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.802445 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c08b19ff-b2cf-49f1-b414-8ce61d82f04e","Type":"ContainerDied","Data":"e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22"} Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.802481 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c08b19ff-b2cf-49f1-b414-8ce61d82f04e","Type":"ContainerDied","Data":"bc7963536c75213b24dec069295b0e8ccef39798aa254f44866a035d7de92234"} Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.802529 5028 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.806466 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b5f042-5da4-4b17-b14d-a0aedb34a160","Type":"ContainerStarted","Data":"23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a"} Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.807528 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" (UID: "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.824110 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c08b19ff-b2cf-49f1-b414-8ce61d82f04e" (UID: "c08b19ff-b2cf-49f1-b414-8ce61d82f04e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.824993 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-config-data" (OuterVolumeSpecName: "config-data") pod "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" (UID: "2c00f3f7-b2d2-45aa-9eef-c7a60aef731c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.831087 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-config-data" (OuterVolumeSpecName: "config-data") pod "c08b19ff-b2cf-49f1-b414-8ce61d82f04e" (UID: "c08b19ff-b2cf-49f1-b414-8ce61d82f04e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.831314 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.831287793 podStartE2EDuration="2.831287793s" podCreationTimestamp="2025-11-23 08:47:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:38.831272573 +0000 UTC m=+7042.528677362" watchObservedRunningTime="2025-11-23 08:47:38.831287793 +0000 UTC m=+7042.528692572" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877492 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877541 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877556 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877571 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgls4\" (UniqueName: \"kubernetes.io/projected/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-kube-api-access-hgls4\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877582 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877591 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k5vw\" (UniqueName: \"kubernetes.io/projected/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-kube-api-access-4k5vw\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877599 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.877607 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c08b19ff-b2cf-49f1-b414-8ce61d82f04e-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.906080 5028 scope.go:117] "RemoveContainer" containerID="67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.929927 5028 scope.go:117] "RemoveContainer" containerID="e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a" Nov 23 08:47:38 crc kubenswrapper[5028]: E1123 08:47:38.930458 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a\": container with ID starting with e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a not found: ID does not exist" 
containerID="e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.930495 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a"} err="failed to get container status \"e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a\": rpc error: code = NotFound desc = could not find container \"e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a\": container with ID starting with e9f7b16157be0c263c325fd3e07e82b947ea97177032d71761202dae498ca91a not found: ID does not exist" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.930526 5028 scope.go:117] "RemoveContainer" containerID="67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6" Nov 23 08:47:38 crc kubenswrapper[5028]: E1123 08:47:38.930853 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6\": container with ID starting with 67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6 not found: ID does not exist" containerID="67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.930916 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6"} err="failed to get container status \"67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6\": rpc error: code = NotFound desc = could not find container \"67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6\": container with ID starting with 67f4bf19f0c03e0f92eab0a8f111e3513708d6f5e1b1e7cc2c528ef42e9c17a6 not found: ID does not exist" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.930970 5028 scope.go:117] "RemoveContainer" containerID="e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.957197 5028 scope.go:117] "RemoveContainer" containerID="a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.979158 5028 scope.go:117] "RemoveContainer" containerID="e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22" Nov 23 08:47:38 crc kubenswrapper[5028]: E1123 08:47:38.979747 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22\": container with ID starting with e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22 not found: ID does not exist" containerID="e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.979858 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22"} err="failed to get container status \"e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22\": rpc error: code = NotFound desc = could not find container \"e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22\": container with ID starting with e8358888c89aace9336abf93268aebd05f0dcea3597a8fb5a6f59d5628847c22 not found: ID does not exist" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 
08:47:38.979945 5028 scope.go:117] "RemoveContainer" containerID="a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9" Nov 23 08:47:38 crc kubenswrapper[5028]: E1123 08:47:38.980597 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9\": container with ID starting with a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9 not found: ID does not exist" containerID="a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9" Nov 23 08:47:38 crc kubenswrapper[5028]: I1123 08:47:38.980634 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9"} err="failed to get container status \"a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9\": rpc error: code = NotFound desc = could not find container \"a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9\": container with ID starting with a2449a99fcb2a663106be0dabb091aa4f3a987ff647ce13ebb0dbbe85f349dc9 not found: ID does not exist" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.136371 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.149234 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.159315 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.176616 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.193913 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: E1123 08:47:39.194411 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-log" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.194432 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-log" Nov 23 08:47:39 crc kubenswrapper[5028]: E1123 08:47:39.194463 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-log" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.194473 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-log" Nov 23 08:47:39 crc kubenswrapper[5028]: E1123 08:47:39.194489 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-metadata" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.194500 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-metadata" Nov 23 08:47:39 crc kubenswrapper[5028]: E1123 08:47:39.194530 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-api" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.194539 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-api" Nov 23 08:47:39 crc 
kubenswrapper[5028]: I1123 08:47:39.194742 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-api" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.194757 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" containerName="nova-api-log" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.194774 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-log" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.194784 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" containerName="nova-metadata-metadata" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.195933 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.200834 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.210925 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.212553 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.214961 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.234019 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.249017 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.285877 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-config-data\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.286028 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169ea2a5-1ce4-459b-adc6-9bdc8b517234-logs\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.286103 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.286223 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.286266 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-config-data\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.286333 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqs8d\" (UniqueName: \"kubernetes.io/projected/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-kube-api-access-bqs8d\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.286414 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-logs\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.286483 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl599\" (UniqueName: \"kubernetes.io/projected/169ea2a5-1ce4-459b-adc6-9bdc8b517234-kube-api-access-hl599\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.389625 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl599\" (UniqueName: \"kubernetes.io/projected/169ea2a5-1ce4-459b-adc6-9bdc8b517234-kube-api-access-hl599\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.389808 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-config-data\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.389932 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169ea2a5-1ce4-459b-adc6-9bdc8b517234-logs\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.390070 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.390146 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.390242 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-config-data\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.390294 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqs8d\" (UniqueName: \"kubernetes.io/projected/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-kube-api-access-bqs8d\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.390404 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-logs\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.390451 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169ea2a5-1ce4-459b-adc6-9bdc8b517234-logs\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.391230 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-logs\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.407839 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-config-data\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.408083 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.408159 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.408684 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-config-data\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.410260 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl599\" (UniqueName: \"kubernetes.io/projected/169ea2a5-1ce4-459b-adc6-9bdc8b517234-kube-api-access-hl599\") pod \"nova-api-0\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.411091 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqs8d\" (UniqueName: \"kubernetes.io/projected/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-kube-api-access-bqs8d\") pod \"nova-metadata-0\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " pod="openstack/nova-metadata-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.522651 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:47:39 crc kubenswrapper[5028]: I1123 08:47:39.536722 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.087909 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:47:40 crc kubenswrapper[5028]: W1123 08:47:40.169200 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod653add63_a41e_4c9c_8fd4_ea7a5d0c2d63.slice/crio-41c18766d009627e345a26e6d54ee18d15e1ca3db5a89ad6a152ef1777f563a2 WatchSource:0}: Error finding container 41c18766d009627e345a26e6d54ee18d15e1ca3db5a89ad6a152ef1777f563a2: Status 404 returned error can't find the container with id 41c18766d009627e345a26e6d54ee18d15e1ca3db5a89ad6a152ef1777f563a2 Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.170184 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.834521 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"169ea2a5-1ce4-459b-adc6-9bdc8b517234","Type":"ContainerStarted","Data":"6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698"} Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.834993 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"169ea2a5-1ce4-459b-adc6-9bdc8b517234","Type":"ContainerStarted","Data":"cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a"} Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.835019 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"169ea2a5-1ce4-459b-adc6-9bdc8b517234","Type":"ContainerStarted","Data":"bd1dcb62f2b338cac6d2babbf294964770f8f47db0e7a93cd52a16d6ee7066d4"} Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.836123 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63","Type":"ContainerStarted","Data":"2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656"} Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.836193 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63","Type":"ContainerStarted","Data":"3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70"} Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.836210 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63","Type":"ContainerStarted","Data":"41c18766d009627e345a26e6d54ee18d15e1ca3db5a89ad6a152ef1777f563a2"} Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.865102 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.86507254 podStartE2EDuration="1.86507254s" podCreationTimestamp="2025-11-23 08:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:40.856460638 +0000 UTC m=+7044.553865437" watchObservedRunningTime="2025-11-23 08:47:40.86507254 +0000 UTC m=+7044.562477349" Nov 23 08:47:40 crc kubenswrapper[5028]: I1123 08:47:40.880631 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=1.880602352 podStartE2EDuration="1.880602352s" podCreationTimestamp="2025-11-23 08:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:47:40.878336446 +0000 UTC m=+7044.575741235" watchObservedRunningTime="2025-11-23 08:47:40.880602352 +0000 UTC m=+7044.578007141" Nov 23 08:47:41 crc kubenswrapper[5028]: I1123 08:47:41.073212 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c00f3f7-b2d2-45aa-9eef-c7a60aef731c" path="/var/lib/kubelet/pods/2c00f3f7-b2d2-45aa-9eef-c7a60aef731c/volumes" Nov 23 08:47:41 crc kubenswrapper[5028]: I1123 08:47:41.074749 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c08b19ff-b2cf-49f1-b414-8ce61d82f04e" path="/var/lib/kubelet/pods/c08b19ff-b2cf-49f1-b414-8ce61d82f04e/volumes" Nov 23 08:47:42 crc kubenswrapper[5028]: I1123 08:47:42.216921 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 08:47:44 crc kubenswrapper[5028]: I1123 08:47:44.538367 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:47:44 crc kubenswrapper[5028]: I1123 08:47:44.539488 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:47:47 crc kubenswrapper[5028]: I1123 08:47:47.217083 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 08:47:47 crc kubenswrapper[5028]: I1123 08:47:47.246494 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 08:47:47 crc kubenswrapper[5028]: I1123 08:47:47.970840 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 08:47:49 crc kubenswrapper[5028]: I1123 08:47:49.523364 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:47:49 crc kubenswrapper[5028]: I1123 08:47:49.523790 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:47:49 crc kubenswrapper[5028]: I1123 08:47:49.538009 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:47:49 crc kubenswrapper[5028]: I1123 08:47:49.538061 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:47:50 crc kubenswrapper[5028]: I1123 08:47:50.606610 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:47:50 crc kubenswrapper[5028]: I1123 08:47:50.689301 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:47:50 crc kubenswrapper[5028]: I1123 08:47:50.689559 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"http://10.217.1.85:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:47:50 crc kubenswrapper[5028]: I1123 08:47:50.689313 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.85:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.527311 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.528192 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.528497 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.528534 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.531464 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.532221 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.540488 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.542369 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.543473 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.782457 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dbc79d8fc-pl86c"] Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.784215 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.816056 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dbc79d8fc-pl86c"] Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.945033 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-config\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.945151 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-dns-svc\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.945243 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.945276 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwgbw\" (UniqueName: \"kubernetes.io/projected/4daf4db5-130e-4432-91b6-e824cf7bedbf-kube-api-access-jwgbw\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:47:59 crc kubenswrapper[5028]: I1123 08:47:59.945351 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.046971 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-dns-svc\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.047076 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.047095 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwgbw\" (UniqueName: \"kubernetes.io/projected/4daf4db5-130e-4432-91b6-e824cf7bedbf-kube-api-access-jwgbw\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.047120 5028 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.047185 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-config\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.048229 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-config\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.049097 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-nb\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.049406 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-dns-svc\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.049511 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-sb\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.081624 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwgbw\" (UniqueName: \"kubernetes.io/projected/4daf4db5-130e-4432-91b6-e824cf7bedbf-kube-api-access-jwgbw\") pod \"dnsmasq-dns-5dbc79d8fc-pl86c\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") " pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.114901 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.135395 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 08:48:00 crc kubenswrapper[5028]: I1123 08:48:00.689508 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dbc79d8fc-pl86c"] Nov 23 08:48:00 crc kubenswrapper[5028]: W1123 08:48:00.706456 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4daf4db5_130e_4432_91b6_e824cf7bedbf.slice/crio-67b4bab42687f7445924963a12f2f621cab6e50a3c0e8a31f8492a12be0c73ff WatchSource:0}: Error finding container 67b4bab42687f7445924963a12f2f621cab6e50a3c0e8a31f8492a12be0c73ff: Status 404 returned error can't find the container with id 67b4bab42687f7445924963a12f2f621cab6e50a3c0e8a31f8492a12be0c73ff Nov 23 08:48:01 crc kubenswrapper[5028]: I1123 08:48:01.140504 5028 generic.go:334] "Generic (PLEG): container finished" podID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerID="1ba2ad153e5a7376ccfc9006ebf9eaf3fdd9b5450c22965b7b117cfdb597044e" exitCode=0 Nov 23 08:48:01 crc kubenswrapper[5028]: I1123 08:48:01.140570 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" event={"ID":"4daf4db5-130e-4432-91b6-e824cf7bedbf","Type":"ContainerDied","Data":"1ba2ad153e5a7376ccfc9006ebf9eaf3fdd9b5450c22965b7b117cfdb597044e"} Nov 23 08:48:01 crc kubenswrapper[5028]: I1123 08:48:01.141030 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" event={"ID":"4daf4db5-130e-4432-91b6-e824cf7bedbf","Type":"ContainerStarted","Data":"67b4bab42687f7445924963a12f2f621cab6e50a3c0e8a31f8492a12be0c73ff"} Nov 23 08:48:02 crc kubenswrapper[5028]: I1123 08:48:02.153227 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" event={"ID":"4daf4db5-130e-4432-91b6-e824cf7bedbf","Type":"ContainerStarted","Data":"7d4b2790266e05611c0e5dd54eb94744c877597a80f2d81a867935189d371dc3"} Nov 23 08:48:02 crc kubenswrapper[5028]: I1123 08:48:02.154315 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:02 crc kubenswrapper[5028]: I1123 08:48:02.188713 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" podStartSLOduration=3.18868433 podStartE2EDuration="3.18868433s" podCreationTimestamp="2025-11-23 08:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:02.177763081 +0000 UTC m=+7065.875167900" watchObservedRunningTime="2025-11-23 08:48:02.18868433 +0000 UTC m=+7065.886089119" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.118226 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.214125 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697f8b7d5c-v5mpz"] Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.214467 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" podUID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerName="dnsmasq-dns" 
containerID="cri-o://5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad" gracePeriod=10 Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.791902 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.890660 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-config\") pod \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.890736 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-dns-svc\") pod \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.891536 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-nb\") pod \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.891577 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbq6t\" (UniqueName: \"kubernetes.io/projected/122a1da8-80b6-47bf-baf6-1cd7010c8bab-kube-api-access-rbq6t\") pod \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.891697 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-sb\") pod \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\" (UID: \"122a1da8-80b6-47bf-baf6-1cd7010c8bab\") " Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.900237 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/122a1da8-80b6-47bf-baf6-1cd7010c8bab-kube-api-access-rbq6t" (OuterVolumeSpecName: "kube-api-access-rbq6t") pod "122a1da8-80b6-47bf-baf6-1cd7010c8bab" (UID: "122a1da8-80b6-47bf-baf6-1cd7010c8bab"). InnerVolumeSpecName "kube-api-access-rbq6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.951872 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-config" (OuterVolumeSpecName: "config") pod "122a1da8-80b6-47bf-baf6-1cd7010c8bab" (UID: "122a1da8-80b6-47bf-baf6-1cd7010c8bab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.952474 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "122a1da8-80b6-47bf-baf6-1cd7010c8bab" (UID: "122a1da8-80b6-47bf-baf6-1cd7010c8bab"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.952453 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "122a1da8-80b6-47bf-baf6-1cd7010c8bab" (UID: "122a1da8-80b6-47bf-baf6-1cd7010c8bab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.964129 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "122a1da8-80b6-47bf-baf6-1cd7010c8bab" (UID: "122a1da8-80b6-47bf-baf6-1cd7010c8bab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.995217 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.995255 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbq6t\" (UniqueName: \"kubernetes.io/projected/122a1da8-80b6-47bf-baf6-1cd7010c8bab-kube-api-access-rbq6t\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.995266 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.995277 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:10 crc kubenswrapper[5028]: I1123 08:48:10.995285 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122a1da8-80b6-47bf-baf6-1cd7010c8bab-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.251487 5028 generic.go:334] "Generic (PLEG): container finished" podID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerID="5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad" exitCode=0 Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.251541 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" event={"ID":"122a1da8-80b6-47bf-baf6-1cd7010c8bab","Type":"ContainerDied","Data":"5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad"} Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.251576 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" event={"ID":"122a1da8-80b6-47bf-baf6-1cd7010c8bab","Type":"ContainerDied","Data":"e9eaf234780b974b0d08b8e0ad26ac4a9de332817333d7feecfbf1e14ba58181"} Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.251605 5028 scope.go:117] "RemoveContainer" containerID="5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad" Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.253473 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-697f8b7d5c-v5mpz" Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.291778 5028 scope.go:117] "RemoveContainer" containerID="6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701" Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.296350 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-697f8b7d5c-v5mpz"] Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.309313 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-697f8b7d5c-v5mpz"] Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.319551 5028 scope.go:117] "RemoveContainer" containerID="5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad" Nov 23 08:48:11 crc kubenswrapper[5028]: E1123 08:48:11.320448 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad\": container with ID starting with 5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad not found: ID does not exist" containerID="5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad" Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.320523 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad"} err="failed to get container status \"5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad\": rpc error: code = NotFound desc = could not find container \"5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad\": container with ID starting with 5d325974ac5ce5d8508fbbc253d6169f468d99b5b601ac277b88687e982491ad not found: ID does not exist" Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.320575 5028 scope.go:117] "RemoveContainer" containerID="6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701" Nov 23 08:48:11 crc kubenswrapper[5028]: E1123 08:48:11.324192 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701\": container with ID starting with 6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701 not found: ID does not exist" containerID="6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701" Nov 23 08:48:11 crc kubenswrapper[5028]: I1123 08:48:11.324251 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701"} err="failed to get container status \"6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701\": rpc error: code = NotFound desc = could not find container \"6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701\": container with ID starting with 6f1df0203e8ed0ea5db0e30a3d1e2592c81377d80da0365a35a37d22549f0701 not found: ID does not exist" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.022995 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fvxjt"] Nov 23 08:48:13 crc kubenswrapper[5028]: E1123 08:48:13.023550 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerName="init" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.023567 5028 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerName="init" Nov 23 08:48:13 crc kubenswrapper[5028]: E1123 08:48:13.023581 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerName="dnsmasq-dns" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.023589 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerName="dnsmasq-dns" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.023786 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" containerName="dnsmasq-dns" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.024534 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.037014 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fvxjt"] Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.072592 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="122a1da8-80b6-47bf-baf6-1cd7010c8bab" path="/var/lib/kubelet/pods/122a1da8-80b6-47bf-baf6-1cd7010c8bab/volumes" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.126495 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6349-account-create-b48mk"] Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.127927 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.131277 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.140197 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hxvw\" (UniqueName: \"kubernetes.io/projected/f399b58f-3799-485c-8746-f4a117f83149-kube-api-access-2hxvw\") pod \"cinder-db-create-fvxjt\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.140269 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f399b58f-3799-485c-8746-f4a117f83149-operator-scripts\") pod \"cinder-db-create-fvxjt\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.145410 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6349-account-create-b48mk"] Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.243094 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-operator-scripts\") pod \"cinder-6349-account-create-b48mk\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.243189 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24p8k\" (UniqueName: \"kubernetes.io/projected/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-kube-api-access-24p8k\") pod \"cinder-6349-account-create-b48mk\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " 
pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.243255 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hxvw\" (UniqueName: \"kubernetes.io/projected/f399b58f-3799-485c-8746-f4a117f83149-kube-api-access-2hxvw\") pod \"cinder-db-create-fvxjt\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.243283 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f399b58f-3799-485c-8746-f4a117f83149-operator-scripts\") pod \"cinder-db-create-fvxjt\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.244133 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f399b58f-3799-485c-8746-f4a117f83149-operator-scripts\") pod \"cinder-db-create-fvxjt\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.273683 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hxvw\" (UniqueName: \"kubernetes.io/projected/f399b58f-3799-485c-8746-f4a117f83149-kube-api-access-2hxvw\") pod \"cinder-db-create-fvxjt\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.342721 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.357080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-operator-scripts\") pod \"cinder-6349-account-create-b48mk\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.357190 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24p8k\" (UniqueName: \"kubernetes.io/projected/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-kube-api-access-24p8k\") pod \"cinder-6349-account-create-b48mk\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.358293 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-operator-scripts\") pod \"cinder-6349-account-create-b48mk\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.381585 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24p8k\" (UniqueName: \"kubernetes.io/projected/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-kube-api-access-24p8k\") pod \"cinder-6349-account-create-b48mk\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.463517 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.889386 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fvxjt"] Nov 23 08:48:13 crc kubenswrapper[5028]: I1123 08:48:13.973103 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6349-account-create-b48mk"] Nov 23 08:48:13 crc kubenswrapper[5028]: W1123 08:48:13.979523 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod875a3c0a_11dc_40cd_a95a_6c6603fe13bb.slice/crio-c416c2eb348426d948d2d6d2cb46d3595fe5f8d1ffa5fbe5f69535545e09495c WatchSource:0}: Error finding container c416c2eb348426d948d2d6d2cb46d3595fe5f8d1ffa5fbe5f69535545e09495c: Status 404 returned error can't find the container with id c416c2eb348426d948d2d6d2cb46d3595fe5f8d1ffa5fbe5f69535545e09495c Nov 23 08:48:14 crc kubenswrapper[5028]: I1123 08:48:14.289670 5028 generic.go:334] "Generic (PLEG): container finished" podID="f399b58f-3799-485c-8746-f4a117f83149" containerID="bc664f4416770ad828ace2eb7e40df5e5a4fee1627dc37131dc742db729a751b" exitCode=0 Nov 23 08:48:14 crc kubenswrapper[5028]: I1123 08:48:14.289893 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fvxjt" event={"ID":"f399b58f-3799-485c-8746-f4a117f83149","Type":"ContainerDied","Data":"bc664f4416770ad828ace2eb7e40df5e5a4fee1627dc37131dc742db729a751b"} Nov 23 08:48:14 crc kubenswrapper[5028]: I1123 08:48:14.290195 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fvxjt" event={"ID":"f399b58f-3799-485c-8746-f4a117f83149","Type":"ContainerStarted","Data":"4ae3c60e1f945f0e1ed8fdb377e580b61596d95d2c22b8970861d4316a8e9e66"} Nov 23 08:48:14 crc kubenswrapper[5028]: I1123 08:48:14.292286 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6349-account-create-b48mk" event={"ID":"875a3c0a-11dc-40cd-a95a-6c6603fe13bb","Type":"ContainerStarted","Data":"1a764db8ca5db41eb2a563a5cf1c101953c603454a86e438477225207cf7826a"} Nov 23 08:48:14 crc kubenswrapper[5028]: I1123 08:48:14.292315 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6349-account-create-b48mk" event={"ID":"875a3c0a-11dc-40cd-a95a-6c6603fe13bb","Type":"ContainerStarted","Data":"c416c2eb348426d948d2d6d2cb46d3595fe5f8d1ffa5fbe5f69535545e09495c"} Nov 23 08:48:14 crc kubenswrapper[5028]: I1123 08:48:14.340390 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6349-account-create-b48mk" podStartSLOduration=1.340365058 podStartE2EDuration="1.340365058s" podCreationTimestamp="2025-11-23 08:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:14.330592588 +0000 UTC m=+7078.027997367" watchObservedRunningTime="2025-11-23 08:48:14.340365058 +0000 UTC m=+7078.037769837" Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.312107 5028 generic.go:334] "Generic (PLEG): container finished" podID="875a3c0a-11dc-40cd-a95a-6c6603fe13bb" containerID="1a764db8ca5db41eb2a563a5cf1c101953c603454a86e438477225207cf7826a" exitCode=0 Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.312213 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6349-account-create-b48mk" 
event={"ID":"875a3c0a-11dc-40cd-a95a-6c6603fe13bb","Type":"ContainerDied","Data":"1a764db8ca5db41eb2a563a5cf1c101953c603454a86e438477225207cf7826a"} Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.732360 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.811465 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hxvw\" (UniqueName: \"kubernetes.io/projected/f399b58f-3799-485c-8746-f4a117f83149-kube-api-access-2hxvw\") pod \"f399b58f-3799-485c-8746-f4a117f83149\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.811629 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f399b58f-3799-485c-8746-f4a117f83149-operator-scripts\") pod \"f399b58f-3799-485c-8746-f4a117f83149\" (UID: \"f399b58f-3799-485c-8746-f4a117f83149\") " Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.815961 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399b58f-3799-485c-8746-f4a117f83149-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f399b58f-3799-485c-8746-f4a117f83149" (UID: "f399b58f-3799-485c-8746-f4a117f83149"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.825200 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f399b58f-3799-485c-8746-f4a117f83149-kube-api-access-2hxvw" (OuterVolumeSpecName: "kube-api-access-2hxvw") pod "f399b58f-3799-485c-8746-f4a117f83149" (UID: "f399b58f-3799-485c-8746-f4a117f83149"). InnerVolumeSpecName "kube-api-access-2hxvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.917460 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hxvw\" (UniqueName: \"kubernetes.io/projected/f399b58f-3799-485c-8746-f4a117f83149-kube-api-access-2hxvw\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:15 crc kubenswrapper[5028]: I1123 08:48:15.917535 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f399b58f-3799-485c-8746-f4a117f83149-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.329067 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fvxjt" Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.329511 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fvxjt" event={"ID":"f399b58f-3799-485c-8746-f4a117f83149","Type":"ContainerDied","Data":"4ae3c60e1f945f0e1ed8fdb377e580b61596d95d2c22b8970861d4316a8e9e66"} Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.329571 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae3c60e1f945f0e1ed8fdb377e580b61596d95d2c22b8970861d4316a8e9e66" Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.778673 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.939885 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-operator-scripts\") pod \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.940136 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24p8k\" (UniqueName: \"kubernetes.io/projected/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-kube-api-access-24p8k\") pod \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\" (UID: \"875a3c0a-11dc-40cd-a95a-6c6603fe13bb\") " Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.941424 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "875a3c0a-11dc-40cd-a95a-6c6603fe13bb" (UID: "875a3c0a-11dc-40cd-a95a-6c6603fe13bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:48:16 crc kubenswrapper[5028]: I1123 08:48:16.953317 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-kube-api-access-24p8k" (OuterVolumeSpecName: "kube-api-access-24p8k") pod "875a3c0a-11dc-40cd-a95a-6c6603fe13bb" (UID: "875a3c0a-11dc-40cd-a95a-6c6603fe13bb"). InnerVolumeSpecName "kube-api-access-24p8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:48:17 crc kubenswrapper[5028]: I1123 08:48:17.042800 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:17 crc kubenswrapper[5028]: I1123 08:48:17.042848 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24p8k\" (UniqueName: \"kubernetes.io/projected/875a3c0a-11dc-40cd-a95a-6c6603fe13bb-kube-api-access-24p8k\") on node \"crc\" DevicePath \"\"" Nov 23 08:48:17 crc kubenswrapper[5028]: I1123 08:48:17.346492 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6349-account-create-b48mk" event={"ID":"875a3c0a-11dc-40cd-a95a-6c6603fe13bb","Type":"ContainerDied","Data":"c416c2eb348426d948d2d6d2cb46d3595fe5f8d1ffa5fbe5f69535545e09495c"} Nov 23 08:48:17 crc kubenswrapper[5028]: I1123 08:48:17.347117 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c416c2eb348426d948d2d6d2cb46d3595fe5f8d1ffa5fbe5f69535545e09495c" Nov 23 08:48:17 crc kubenswrapper[5028]: I1123 08:48:17.346576 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6349-account-create-b48mk" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.406694 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nmzgg"] Nov 23 08:48:18 crc kubenswrapper[5028]: E1123 08:48:18.407683 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f399b58f-3799-485c-8746-f4a117f83149" containerName="mariadb-database-create" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.407703 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f399b58f-3799-485c-8746-f4a117f83149" containerName="mariadb-database-create" Nov 23 08:48:18 crc kubenswrapper[5028]: E1123 08:48:18.407725 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875a3c0a-11dc-40cd-a95a-6c6603fe13bb" containerName="mariadb-account-create" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.407733 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="875a3c0a-11dc-40cd-a95a-6c6603fe13bb" containerName="mariadb-account-create" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.408132 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="875a3c0a-11dc-40cd-a95a-6c6603fe13bb" containerName="mariadb-account-create" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.408172 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f399b58f-3799-485c-8746-f4a117f83149" containerName="mariadb-database-create" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.409201 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.413535 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.413600 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.413623 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9zzjs" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.426287 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmzgg"] Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.474101 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff0efee3-edd3-49be-b488-e32c46214d32-etc-machine-id\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.474236 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-combined-ca-bundle\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.474279 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-db-sync-config-data\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.474310 
5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-config-data\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.474336 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhvdl\" (UniqueName: \"kubernetes.io/projected/ff0efee3-edd3-49be-b488-e32c46214d32-kube-api-access-mhvdl\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.474360 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-scripts\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.576991 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-combined-ca-bundle\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.577082 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-db-sync-config-data\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.577135 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-config-data\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.577176 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhvdl\" (UniqueName: \"kubernetes.io/projected/ff0efee3-edd3-49be-b488-e32c46214d32-kube-api-access-mhvdl\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.577212 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-scripts\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.577407 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff0efee3-edd3-49be-b488-e32c46214d32-etc-machine-id\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.577607 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ff0efee3-edd3-49be-b488-e32c46214d32-etc-machine-id\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.585056 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-combined-ca-bundle\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.585845 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-config-data\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.587056 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-scripts\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.587922 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-db-sync-config-data\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.594482 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhvdl\" (UniqueName: \"kubernetes.io/projected/ff0efee3-edd3-49be-b488-e32c46214d32-kube-api-access-mhvdl\") pod \"cinder-db-sync-nmzgg\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") " pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:18 crc kubenswrapper[5028]: I1123 08:48:18.733694 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nmzgg" Nov 23 08:48:19 crc kubenswrapper[5028]: I1123 08:48:19.253136 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nmzgg"] Nov 23 08:48:19 crc kubenswrapper[5028]: W1123 08:48:19.260567 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff0efee3_edd3_49be_b488_e32c46214d32.slice/crio-7e292d478565d04372243d2569172130713b3d005ffd6b68985e25e5032f4211 WatchSource:0}: Error finding container 7e292d478565d04372243d2569172130713b3d005ffd6b68985e25e5032f4211: Status 404 returned error can't find the container with id 7e292d478565d04372243d2569172130713b3d005ffd6b68985e25e5032f4211 Nov 23 08:48:19 crc kubenswrapper[5028]: I1123 08:48:19.368369 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmzgg" event={"ID":"ff0efee3-edd3-49be-b488-e32c46214d32","Type":"ContainerStarted","Data":"7e292d478565d04372243d2569172130713b3d005ffd6b68985e25e5032f4211"} Nov 23 08:48:39 crc kubenswrapper[5028]: E1123 08:48:39.562326 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 08:48:39 crc kubenswrapper[5028]: E1123 08:48:39.563372 5028 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 08:48:39 crc kubenswrapper[5028]: E1123 08:48:39.563577 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhvdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nmzgg_openstack(ff0efee3-edd3-49be-b488-e32c46214d32): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 08:48:39 crc kubenswrapper[5028]: E1123 08:48:39.565354 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nmzgg" podUID="ff0efee3-edd3-49be-b488-e32c46214d32" Nov 23 08:48:39 crc kubenswrapper[5028]: E1123 08:48:39.782687 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/cinder-db-sync-nmzgg" podUID="ff0efee3-edd3-49be-b488-e32c46214d32" Nov 23 08:48:51 crc kubenswrapper[5028]: I1123 08:48:51.923606 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmzgg" event={"ID":"ff0efee3-edd3-49be-b488-e32c46214d32","Type":"ContainerStarted","Data":"5679508187f4f87e15a973e7e7197521ada44005dd11eb47cb3477d71271a75e"} Nov 23 08:48:51 crc kubenswrapper[5028]: I1123 08:48:51.969247 5028 
Nov 23 08:48:51 crc kubenswrapper[5028]: I1123 08:48:51.969247 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nmzgg" podStartSLOduration=2.944561313 podStartE2EDuration="33.969220147s" podCreationTimestamp="2025-11-23 08:48:18 +0000 UTC" firstStartedPulling="2025-11-23 08:48:19.265638457 +0000 UTC m=+7082.963043266" lastFinishedPulling="2025-11-23 08:48:50.290297301 +0000 UTC m=+7113.987702100" observedRunningTime="2025-11-23 08:48:51.956852382 +0000 UTC m=+7115.654257201" watchObservedRunningTime="2025-11-23 08:48:51.969220147 +0000 UTC m=+7115.666624926"
Nov 23 08:48:53 crc kubenswrapper[5028]: I1123 08:48:53.949201 5028 generic.go:334] "Generic (PLEG): container finished" podID="ff0efee3-edd3-49be-b488-e32c46214d32" containerID="5679508187f4f87e15a973e7e7197521ada44005dd11eb47cb3477d71271a75e" exitCode=0
Nov 23 08:48:53 crc kubenswrapper[5028]: I1123 08:48:53.949273 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmzgg" event={"ID":"ff0efee3-edd3-49be-b488-e32c46214d32","Type":"ContainerDied","Data":"5679508187f4f87e15a973e7e7197521ada44005dd11eb47cb3477d71271a75e"}
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.453402 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmzgg"
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.574774 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-scripts\") pod \"ff0efee3-edd3-49be-b488-e32c46214d32\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") "
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.574877 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhvdl\" (UniqueName: \"kubernetes.io/projected/ff0efee3-edd3-49be-b488-e32c46214d32-kube-api-access-mhvdl\") pod \"ff0efee3-edd3-49be-b488-e32c46214d32\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") "
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.575003 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-db-sync-config-data\") pod \"ff0efee3-edd3-49be-b488-e32c46214d32\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") "
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.575063 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff0efee3-edd3-49be-b488-e32c46214d32-etc-machine-id\") pod \"ff0efee3-edd3-49be-b488-e32c46214d32\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") "
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.575270 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff0efee3-edd3-49be-b488-e32c46214d32-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff0efee3-edd3-49be-b488-e32c46214d32" (UID: "ff0efee3-edd3-49be-b488-e32c46214d32"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.575408 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-config-data\") pod \"ff0efee3-edd3-49be-b488-e32c46214d32\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") "
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.575583 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-combined-ca-bundle\") pod \"ff0efee3-edd3-49be-b488-e32c46214d32\" (UID: \"ff0efee3-edd3-49be-b488-e32c46214d32\") "
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.576873 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff0efee3-edd3-49be-b488-e32c46214d32-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.585233 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-scripts" (OuterVolumeSpecName: "scripts") pod "ff0efee3-edd3-49be-b488-e32c46214d32" (UID: "ff0efee3-edd3-49be-b488-e32c46214d32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.585637 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0efee3-edd3-49be-b488-e32c46214d32-kube-api-access-mhvdl" (OuterVolumeSpecName: "kube-api-access-mhvdl") pod "ff0efee3-edd3-49be-b488-e32c46214d32" (UID: "ff0efee3-edd3-49be-b488-e32c46214d32"). InnerVolumeSpecName "kube-api-access-mhvdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.598127 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ff0efee3-edd3-49be-b488-e32c46214d32" (UID: "ff0efee3-edd3-49be-b488-e32c46214d32"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.608757 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff0efee3-edd3-49be-b488-e32c46214d32" (UID: "ff0efee3-edd3-49be-b488-e32c46214d32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.655140 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-config-data" (OuterVolumeSpecName: "config-data") pod "ff0efee3-edd3-49be-b488-e32c46214d32" (UID: "ff0efee3-edd3-49be-b488-e32c46214d32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.679001 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.679059 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.679074 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhvdl\" (UniqueName: \"kubernetes.io/projected/ff0efee3-edd3-49be-b488-e32c46214d32-kube-api-access-mhvdl\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.679091 5028 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.679135 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff0efee3-edd3-49be-b488-e32c46214d32-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.977621 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nmzgg" event={"ID":"ff0efee3-edd3-49be-b488-e32c46214d32","Type":"ContainerDied","Data":"7e292d478565d04372243d2569172130713b3d005ffd6b68985e25e5032f4211"}
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.977668 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e292d478565d04372243d2569172130713b3d005ffd6b68985e25e5032f4211"
Nov 23 08:48:55 crc kubenswrapper[5028]: I1123 08:48:55.977804 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nmzgg"
Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.305897 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"]
Nov 23 08:48:56 crc kubenswrapper[5028]: E1123 08:48:56.306814 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0efee3-edd3-49be-b488-e32c46214d32" containerName="cinder-db-sync"
Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.306829 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0efee3-edd3-49be-b488-e32c46214d32" containerName="cinder-db-sync"
Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.311400 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0efee3-edd3-49be-b488-e32c46214d32" containerName="cinder-db-sync"
Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.334844 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"
Need to start a new one" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.351894 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"] Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.394101 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-config\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.394194 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-dns-svc\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.394500 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.394528 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwjkf\" (UniqueName: \"kubernetes.io/projected/5b8d2873-13b3-410e-8597-342fc58a49fb-kube-api-access-lwjkf\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.394569 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.496740 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.496810 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwjkf\" (UniqueName: \"kubernetes.io/projected/5b8d2873-13b3-410e-8597-342fc58a49fb-kube-api-access-lwjkf\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.496874 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.496929 5028 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-config\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.497109 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-dns-svc\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.497889 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.501499 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-config\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.502068 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.505568 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-dns-svc\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.543373 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwjkf\" (UniqueName: \"kubernetes.io/projected/5b8d2873-13b3-410e-8597-342fc58a49fb-kube-api-access-lwjkf\") pod \"dnsmasq-dns-6cb8fb6fc-pqrrq\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.618082 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.619924 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.622685 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.623541 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.624067 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.626891 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9zzjs" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.631436 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.665617 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.701648 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-scripts\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.701828 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.701936 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-logs\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.701992 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.702018 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrg4d\" (UniqueName: \"kubernetes.io/projected/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-kube-api-access-nrg4d\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.702049 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.702076 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.804756 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.805378 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-logs\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.805434 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.805480 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrg4d\" (UniqueName: \"kubernetes.io/projected/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-kube-api-access-nrg4d\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.805520 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.805553 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.805639 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-scripts\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.806082 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.806792 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-logs\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.810481 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data-custom\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.810679 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.812098 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.828509 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-scripts\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.841719 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrg4d\" (UniqueName: \"kubernetes.io/projected/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-kube-api-access-nrg4d\") pod \"cinder-api-0\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " pod="openstack/cinder-api-0" Nov 23 08:48:56 crc kubenswrapper[5028]: I1123 08:48:56.950088 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:48:57 crc kubenswrapper[5028]: I1123 08:48:57.238870 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"] Nov 23 08:48:57 crc kubenswrapper[5028]: I1123 08:48:57.430609 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:48:58 crc kubenswrapper[5028]: I1123 08:48:58.024464 5028 generic.go:334] "Generic (PLEG): container finished" podID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerID="b35550993ec1096ab0b050a2bd0ad4f719924fd54164f6eb3277cf6da42b0814" exitCode=0 Nov 23 08:48:58 crc kubenswrapper[5028]: I1123 08:48:58.024927 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" event={"ID":"5b8d2873-13b3-410e-8597-342fc58a49fb","Type":"ContainerDied","Data":"b35550993ec1096ab0b050a2bd0ad4f719924fd54164f6eb3277cf6da42b0814"} Nov 23 08:48:58 crc kubenswrapper[5028]: I1123 08:48:58.025000 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" event={"ID":"5b8d2873-13b3-410e-8597-342fc58a49fb","Type":"ContainerStarted","Data":"dea60cc11281a657a0dc0a34bfaf044f7dc4e4ad8295954a638944a8478e374e"} Nov 23 08:48:58 crc kubenswrapper[5028]: I1123 08:48:58.027889 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee","Type":"ContainerStarted","Data":"af5f01efccb61444dc9547dda03a2f6696da9dfbb81625d7e76a02f6a040bb42"} Nov 23 08:48:59 crc kubenswrapper[5028]: I1123 08:48:59.038795 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee","Type":"ContainerStarted","Data":"d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e"} Nov 23 08:48:59 crc 
Nov 23 08:48:59 crc kubenswrapper[5028]: I1123 08:48:59.039343 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 23 08:48:59 crc kubenswrapper[5028]: I1123 08:48:59.039364 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee","Type":"ContainerStarted","Data":"d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f"}
Nov 23 08:48:59 crc kubenswrapper[5028]: I1123 08:48:59.041331 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" event={"ID":"5b8d2873-13b3-410e-8597-342fc58a49fb","Type":"ContainerStarted","Data":"6ede3c4a3084d0aa498c2a94d7175a16d7f4147b973090909ba21a8389bf8c23"}
Nov 23 08:48:59 crc kubenswrapper[5028]: I1123 08:48:59.041501 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"
Nov 23 08:48:59 crc kubenswrapper[5028]: I1123 08:48:59.063107 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.063080112 podStartE2EDuration="3.063080112s" podCreationTimestamp="2025-11-23 08:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:59.054888241 +0000 UTC m=+7122.752293020" watchObservedRunningTime="2025-11-23 08:48:59.063080112 +0000 UTC m=+7122.760484911"
Nov 23 08:48:59 crc kubenswrapper[5028]: I1123 08:48:59.081072 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" podStartSLOduration=3.081045755 podStartE2EDuration="3.081045755s" podCreationTimestamp="2025-11-23 08:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:48:59.071159791 +0000 UTC m=+7122.768564580" watchObservedRunningTime="2025-11-23 08:48:59.081045755 +0000 UTC m=+7122.778450554"
Nov 23 08:49:06 crc kubenswrapper[5028]: I1123 08:49:06.668237 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"
Nov 23 08:49:06 crc kubenswrapper[5028]: I1123 08:49:06.766214 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dbc79d8fc-pl86c"]
Nov 23 08:49:06 crc kubenswrapper[5028]: I1123 08:49:06.766583 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" podUID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerName="dnsmasq-dns" containerID="cri-o://7d4b2790266e05611c0e5dd54eb94744c877597a80f2d81a867935189d371dc3" gracePeriod=10
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.180743 5028 generic.go:334] "Generic (PLEG): container finished" podID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerID="7d4b2790266e05611c0e5dd54eb94744c877597a80f2d81a867935189d371dc3" exitCode=0
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.181177 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" event={"ID":"4daf4db5-130e-4432-91b6-e824cf7bedbf","Type":"ContainerDied","Data":"7d4b2790266e05611c0e5dd54eb94744c877597a80f2d81a867935189d371dc3"}
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.288045 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c"
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.480631 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-config\") pod \"4daf4db5-130e-4432-91b6-e824cf7bedbf\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") "
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.480705 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwgbw\" (UniqueName: \"kubernetes.io/projected/4daf4db5-130e-4432-91b6-e824cf7bedbf-kube-api-access-jwgbw\") pod \"4daf4db5-130e-4432-91b6-e824cf7bedbf\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") "
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.480796 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-nb\") pod \"4daf4db5-130e-4432-91b6-e824cf7bedbf\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") "
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.480891 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-sb\") pod \"4daf4db5-130e-4432-91b6-e824cf7bedbf\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") "
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.480934 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-dns-svc\") pod \"4daf4db5-130e-4432-91b6-e824cf7bedbf\" (UID: \"4daf4db5-130e-4432-91b6-e824cf7bedbf\") "
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.495198 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4daf4db5-130e-4432-91b6-e824cf7bedbf-kube-api-access-jwgbw" (OuterVolumeSpecName: "kube-api-access-jwgbw") pod "4daf4db5-130e-4432-91b6-e824cf7bedbf" (UID: "4daf4db5-130e-4432-91b6-e824cf7bedbf"). InnerVolumeSpecName "kube-api-access-jwgbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.546105 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4daf4db5-130e-4432-91b6-e824cf7bedbf" (UID: "4daf4db5-130e-4432-91b6-e824cf7bedbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.562613 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4daf4db5-130e-4432-91b6-e824cf7bedbf" (UID: "4daf4db5-130e-4432-91b6-e824cf7bedbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.563180 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-config" (OuterVolumeSpecName: "config") pod "4daf4db5-130e-4432-91b6-e824cf7bedbf" (UID: "4daf4db5-130e-4432-91b6-e824cf7bedbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.563548 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4daf4db5-130e-4432-91b6-e824cf7bedbf" (UID: "4daf4db5-130e-4432-91b6-e824cf7bedbf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.583484 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-config\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.583519 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwgbw\" (UniqueName: \"kubernetes.io/projected/4daf4db5-130e-4432-91b6-e824cf7bedbf-kube-api-access-jwgbw\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.583537 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.583545 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:07 crc kubenswrapper[5028]: I1123 08:49:07.583553 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4daf4db5-130e-4432-91b6-e824cf7bedbf-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.215501 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" event={"ID":"4daf4db5-130e-4432-91b6-e824cf7bedbf","Type":"ContainerDied","Data":"67b4bab42687f7445924963a12f2f621cab6e50a3c0e8a31f8492a12be0c73ff"}
Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.216039 5028 scope.go:117] "RemoveContainer" containerID="7d4b2790266e05611c0e5dd54eb94744c877597a80f2d81a867935189d371dc3"
Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.215897 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c"
Need to start a new one" pod="openstack/dnsmasq-dns-5dbc79d8fc-pl86c" Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.235278 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.235803 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" containerName="nova-cell0-conductor-conductor" containerID="cri-o://bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7" gracePeriod=30 Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.252549 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.252819 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d5b5f042-5da4-4b17-b14d-a0aedb34a160" containerName="nova-scheduler-scheduler" containerID="cri-o://23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a" gracePeriod=30 Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.268259 5028 scope.go:117] "RemoveContainer" containerID="1ba2ad153e5a7376ccfc9006ebf9eaf3fdd9b5450c22965b7b117cfdb597044e" Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.277732 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.278372 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-log" containerID="cri-o://cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a" gracePeriod=30 Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.281184 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-api" containerID="cri-o://6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698" gracePeriod=30 Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.313268 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dbc79d8fc-pl86c"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.328069 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dbc79d8fc-pl86c"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.338361 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.339033 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3398c486-6d5f-49ac-8680-7e8e828665bd" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://81c1e9d06f0831130856219d79b415e16dccfaf2ddd76c004d3fdf9e396b23c6" gracePeriod=30 Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.357899 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.358284 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-log" containerID="cri-o://3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70" gracePeriod=30 Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.358899 5028 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-metadata" containerID="cri-o://2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656" gracePeriod=30 Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.361985 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:49:08 crc kubenswrapper[5028]: I1123 08:49:08.362253 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="500cdf4f-8422-4fe1-942e-6db6cbcbed60" containerName="nova-cell1-conductor-conductor" containerID="cri-o://f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1" gracePeriod=30 Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.075282 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4daf4db5-130e-4432-91b6-e824cf7bedbf" path="/var/lib/kubelet/pods/4daf4db5-130e-4432-91b6-e824cf7bedbf/volumes" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.154829 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.236732 5028 generic.go:334] "Generic (PLEG): container finished" podID="3398c486-6d5f-49ac-8680-7e8e828665bd" containerID="81c1e9d06f0831130856219d79b415e16dccfaf2ddd76c004d3fdf9e396b23c6" exitCode=0 Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.237780 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3398c486-6d5f-49ac-8680-7e8e828665bd","Type":"ContainerDied","Data":"81c1e9d06f0831130856219d79b415e16dccfaf2ddd76c004d3fdf9e396b23c6"} Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.240099 5028 generic.go:334] "Generic (PLEG): container finished" podID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerID="cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a" exitCode=143 Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.240171 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"169ea2a5-1ce4-459b-adc6-9bdc8b517234","Type":"ContainerDied","Data":"cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a"} Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.241615 5028 generic.go:334] "Generic (PLEG): container finished" podID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerID="3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70" exitCode=143 Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.241641 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63","Type":"ContainerDied","Data":"3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70"} Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.450243 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.540937 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lvzl\" (UniqueName: \"kubernetes.io/projected/3398c486-6d5f-49ac-8680-7e8e828665bd-kube-api-access-8lvzl\") pod \"3398c486-6d5f-49ac-8680-7e8e828665bd\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.541158 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-combined-ca-bundle\") pod \"3398c486-6d5f-49ac-8680-7e8e828665bd\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.541241 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-config-data\") pod \"3398c486-6d5f-49ac-8680-7e8e828665bd\" (UID: \"3398c486-6d5f-49ac-8680-7e8e828665bd\") " Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.554110 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3398c486-6d5f-49ac-8680-7e8e828665bd-kube-api-access-8lvzl" (OuterVolumeSpecName: "kube-api-access-8lvzl") pod "3398c486-6d5f-49ac-8680-7e8e828665bd" (UID: "3398c486-6d5f-49ac-8680-7e8e828665bd"). InnerVolumeSpecName "kube-api-access-8lvzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.575870 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3398c486-6d5f-49ac-8680-7e8e828665bd" (UID: "3398c486-6d5f-49ac-8680-7e8e828665bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.579286 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-config-data" (OuterVolumeSpecName: "config-data") pod "3398c486-6d5f-49ac-8680-7e8e828665bd" (UID: "3398c486-6d5f-49ac-8680-7e8e828665bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.609578 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.642630 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-config-data\") pod \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.643352 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-combined-ca-bundle\") pod \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.643850 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99hrd\" (UniqueName: \"kubernetes.io/projected/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-kube-api-access-99hrd\") pod \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\" (UID: \"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf\") " Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.646170 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lvzl\" (UniqueName: \"kubernetes.io/projected/3398c486-6d5f-49ac-8680-7e8e828665bd-kube-api-access-8lvzl\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.646424 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.646491 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3398c486-6d5f-49ac-8680-7e8e828665bd-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.650074 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-kube-api-access-99hrd" (OuterVolumeSpecName: "kube-api-access-99hrd") pod "c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" (UID: "c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf"). InnerVolumeSpecName "kube-api-access-99hrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.672873 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" (UID: "c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.679137 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-config-data" (OuterVolumeSpecName: "config-data") pod "c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" (UID: "c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.749133 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99hrd\" (UniqueName: \"kubernetes.io/projected/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-kube-api-access-99hrd\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.749171 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:09 crc kubenswrapper[5028]: I1123 08:49:09.749182 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.252225 5028 generic.go:334] "Generic (PLEG): container finished" podID="c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" containerID="bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7" exitCode=0 Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.252290 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.252378 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf","Type":"ContainerDied","Data":"bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7"} Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.252429 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf","Type":"ContainerDied","Data":"d0ef806f070fb5d313bcdbe1e5f85afbf0e9c7fbd2c37166ed08c8a9d045b93f"} Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.252454 5028 scope.go:117] "RemoveContainer" containerID="bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.255367 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3398c486-6d5f-49ac-8680-7e8e828665bd","Type":"ContainerDied","Data":"bb09e530b831528aca78881869973d0a9b5ff56cbe09d2c900aed19501e4cfce"} Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.255509 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.309163 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.340822 5028 scope.go:117] "RemoveContainer" containerID="bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7" Nov 23 08:49:10 crc kubenswrapper[5028]: E1123 08:49:10.342026 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7\": container with ID starting with bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7 not found: ID does not exist" containerID="bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.342069 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7"} err="failed to get container status \"bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7\": rpc error: code = NotFound desc = could not find container \"bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7\": container with ID starting with bf894f72da5f009e7d2d92b7f3143dcae90344b56b42b786c8d10d999db83cd7 not found: ID does not exist" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.342097 5028 scope.go:117] "RemoveContainer" containerID="81c1e9d06f0831130856219d79b415e16dccfaf2ddd76c004d3fdf9e396b23c6" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.344277 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.358462 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.368683 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.378419 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: E1123 08:49:10.378913 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerName="init" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.378933 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerName="init" Nov 23 08:49:10 crc kubenswrapper[5028]: E1123 08:49:10.378970 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerName="dnsmasq-dns" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.378977 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerName="dnsmasq-dns" Nov 23 08:49:10 crc kubenswrapper[5028]: E1123 08:49:10.378997 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3398c486-6d5f-49ac-8680-7e8e828665bd" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.379004 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3398c486-6d5f-49ac-8680-7e8e828665bd" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 08:49:10 crc kubenswrapper[5028]: E1123 08:49:10.379026 5028 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" containerName="nova-cell0-conductor-conductor" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.379033 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" containerName="nova-cell0-conductor-conductor" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.379219 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4daf4db5-130e-4432-91b6-e824cf7bedbf" containerName="dnsmasq-dns" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.379238 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3398c486-6d5f-49ac-8680-7e8e828665bd" containerName="nova-cell1-novncproxy-novncproxy" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.379263 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" containerName="nova-cell0-conductor-conductor" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.380048 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.384383 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.386134 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.399849 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.401393 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.403202 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.410126 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.460038 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.460087 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwwr6\" (UniqueName: \"kubernetes.io/projected/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-kube-api-access-wwwr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.460149 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-786fw\" (UniqueName: \"kubernetes.io/projected/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-kube-api-access-786fw\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.460181 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.460220 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.460286 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.564465 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.564525 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwwr6\" (UniqueName: \"kubernetes.io/projected/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-kube-api-access-wwwr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.564594 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-786fw\" (UniqueName: \"kubernetes.io/projected/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-kube-api-access-786fw\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.564628 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.564662 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.564721 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.576078 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-config-data\") pod 
\"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.576172 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.577474 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.582462 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.592695 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwwr6\" (UniqueName: \"kubernetes.io/projected/6d698ed1-a5a2-47f9-9fc0-430fe08a8909-kube-api-access-wwwr6\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d698ed1-a5a2-47f9-9fc0-430fe08a8909\") " pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.595665 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-786fw\" (UniqueName: \"kubernetes.io/projected/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-kube-api-access-786fw\") pod \"nova-cell0-conductor-0\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.719179 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.732601 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.733537 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.872151 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-config-data\") pod \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.872245 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8nxn\" (UniqueName: \"kubernetes.io/projected/500cdf4f-8422-4fe1-942e-6db6cbcbed60-kube-api-access-q8nxn\") pod \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.872480 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-combined-ca-bundle\") pod \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\" (UID: \"500cdf4f-8422-4fe1-942e-6db6cbcbed60\") " Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.881751 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500cdf4f-8422-4fe1-942e-6db6cbcbed60-kube-api-access-q8nxn" (OuterVolumeSpecName: "kube-api-access-q8nxn") pod "500cdf4f-8422-4fe1-942e-6db6cbcbed60" (UID: "500cdf4f-8422-4fe1-942e-6db6cbcbed60"). InnerVolumeSpecName "kube-api-access-q8nxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.905435 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "500cdf4f-8422-4fe1-942e-6db6cbcbed60" (UID: "500cdf4f-8422-4fe1-942e-6db6cbcbed60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.914056 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-config-data" (OuterVolumeSpecName: "config-data") pod "500cdf4f-8422-4fe1-942e-6db6cbcbed60" (UID: "500cdf4f-8422-4fe1-942e-6db6cbcbed60"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.974636 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.974670 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/500cdf4f-8422-4fe1-942e-6db6cbcbed60-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:10 crc kubenswrapper[5028]: I1123 08:49:10.974682 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8nxn\" (UniqueName: \"kubernetes.io/projected/500cdf4f-8422-4fe1-942e-6db6cbcbed60-kube-api-access-q8nxn\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.080969 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3398c486-6d5f-49ac-8680-7e8e828665bd" path="/var/lib/kubelet/pods/3398c486-6d5f-49ac-8680-7e8e828665bd/volumes" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.081600 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf" path="/var/lib/kubelet/pods/c1443cfc-c79d-45b4-b29a-4fb12d2c3ddf/volumes" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.237850 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 08:49:11 crc kubenswrapper[5028]: W1123 08:49:11.241922 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfb8a6b_2b6b_45d8_a1ef_aba840ffc02d.slice/crio-7aa50e6be0990133f59ddf70976688178d5ed5d8f7631fd2021fb67b450e9a04 WatchSource:0}: Error finding container 7aa50e6be0990133f59ddf70976688178d5ed5d8f7631fd2021fb67b450e9a04: Status 404 returned error can't find the container with id 7aa50e6be0990133f59ddf70976688178d5ed5d8f7631fd2021fb67b450e9a04 Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.270361 5028 generic.go:334] "Generic (PLEG): container finished" podID="500cdf4f-8422-4fe1-942e-6db6cbcbed60" containerID="f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1" exitCode=0 Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.271101 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.272206 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"500cdf4f-8422-4fe1-942e-6db6cbcbed60","Type":"ContainerDied","Data":"f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1"} Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.272243 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"500cdf4f-8422-4fe1-942e-6db6cbcbed60","Type":"ContainerDied","Data":"133acae7253b9ca5c9aea3a3019841165f54791ff90e974410d8b467ed47da04"} Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.272263 5028 scope.go:117] "RemoveContainer" containerID="f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.277648 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d","Type":"ContainerStarted","Data":"7aa50e6be0990133f59ddf70976688178d5ed5d8f7631fd2021fb67b450e9a04"} Nov 23 08:49:11 crc kubenswrapper[5028]: W1123 08:49:11.396508 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d698ed1_a5a2_47f9_9fc0_430fe08a8909.slice/crio-ab2c25f80e98ea2c283cf8c3b89f610b26f749d12a3329cf4bc04ed57c0e1f45 WatchSource:0}: Error finding container ab2c25f80e98ea2c283cf8c3b89f610b26f749d12a3329cf4bc04ed57c0e1f45: Status 404 returned error can't find the container with id ab2c25f80e98ea2c283cf8c3b89f610b26f749d12a3329cf4bc04ed57c0e1f45 Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.404500 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.416377 5028 scope.go:117] "RemoveContainer" containerID="f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1" Nov 23 08:49:11 crc kubenswrapper[5028]: E1123 08:49:11.416852 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1\": container with ID starting with f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1 not found: ID does not exist" containerID="f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.416902 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1"} err="failed to get container status \"f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1\": rpc error: code = NotFound desc = could not find container \"f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1\": container with ID starting with f4f8387c3e81f4399f3ee00f71c61a46defb255266f1491b52aa52809cfed1e1 not found: ID does not exist" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.447123 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.463691 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.476335 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:49:11 crc kubenswrapper[5028]: E1123 08:49:11.476884 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500cdf4f-8422-4fe1-942e-6db6cbcbed60" containerName="nova-cell1-conductor-conductor" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.476899 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="500cdf4f-8422-4fe1-942e-6db6cbcbed60" containerName="nova-cell1-conductor-conductor" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.477174 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="500cdf4f-8422-4fe1-942e-6db6cbcbed60" containerName="nova-cell1-conductor-conductor" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.477991 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.486057 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.493141 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.519593 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.85:8775/\": read tcp 10.217.0.2:37176->10.217.1.85:8775: read: connection reset by peer" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.519984 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.85:8775/\": read tcp 10.217.0.2:37174->10.217.1.85:8775: read: connection reset by peer" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.586652 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.586788 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.586869 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgvm\" (UniqueName: \"kubernetes.io/projected/1665af0d-f89b-4704-95cc-4e46d2493132-kube-api-access-vtgvm\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.688100 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgvm\" (UniqueName: \"kubernetes.io/projected/1665af0d-f89b-4704-95cc-4e46d2493132-kube-api-access-vtgvm\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 
08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.688494 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.688590 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.695301 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.695401 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.709143 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": read tcp 10.217.0.2:50772->10.217.1.84:8774: read: connection reset by peer" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.709148 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": read tcp 10.217.0.2:50766->10.217.1.84:8774: read: connection reset by peer" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.709747 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgvm\" (UniqueName: \"kubernetes.io/projected/1665af0d-f89b-4704-95cc-4e46d2493132-kube-api-access-vtgvm\") pod \"nova-cell1-conductor-0\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.814804 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:11 crc kubenswrapper[5028]: I1123 08:49:11.956377 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.101711 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqs8d\" (UniqueName: \"kubernetes.io/projected/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-kube-api-access-bqs8d\") pod \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.103055 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-combined-ca-bundle\") pod \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.103100 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-config-data\") pod \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.103206 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-logs\") pod \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\" (UID: \"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.107109 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-logs" (OuterVolumeSpecName: "logs") pod "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" (UID: "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.109341 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-kube-api-access-bqs8d" (OuterVolumeSpecName: "kube-api-access-bqs8d") pod "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" (UID: "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63"). InnerVolumeSpecName "kube-api-access-bqs8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.142550 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-config-data" (OuterVolumeSpecName: "config-data") pod "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" (UID: "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.159522 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" (UID: "653add63-a41e-4c9c-8fd4-ea7a5d0c2d63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.208682 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqs8d\" (UniqueName: \"kubernetes.io/projected/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-kube-api-access-bqs8d\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.208717 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.208729 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.208740 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.246123 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.247987 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.249357 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.249392 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d5b5f042-5da4-4b17-b14d-a0aedb34a160" containerName="nova-scheduler-scheduler" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.280087 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.326215 5028 generic.go:334] "Generic (PLEG): container finished" podID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerID="6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698" exitCode=0 Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.326290 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"169ea2a5-1ce4-459b-adc6-9bdc8b517234","Type":"ContainerDied","Data":"6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698"} Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.326324 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"169ea2a5-1ce4-459b-adc6-9bdc8b517234","Type":"ContainerDied","Data":"bd1dcb62f2b338cac6d2babbf294964770f8f47db0e7a93cd52a16d6ee7066d4"} Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.326342 5028 scope.go:117] "RemoveContainer" containerID="6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.326483 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.332074 5028 generic.go:334] "Generic (PLEG): container finished" podID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerID="2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656" exitCode=0 Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.332143 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.332276 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63","Type":"ContainerDied","Data":"2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656"} Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.332322 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"653add63-a41e-4c9c-8fd4-ea7a5d0c2d63","Type":"ContainerDied","Data":"41c18766d009627e345a26e6d54ee18d15e1ca3db5a89ad6a152ef1777f563a2"} Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.337401 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d698ed1-a5a2-47f9-9fc0-430fe08a8909","Type":"ContainerStarted","Data":"6eacad1d231d3822d798fe8ba4a1bfb7a8ffd388b8dd2832429528787907a654"} Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.337471 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d698ed1-a5a2-47f9-9fc0-430fe08a8909","Type":"ContainerStarted","Data":"ab2c25f80e98ea2c283cf8c3b89f610b26f749d12a3329cf4bc04ed57c0e1f45"} Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.352278 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d","Type":"ContainerStarted","Data":"f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af"} Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.356978 5028 scope.go:117] "RemoveContainer" containerID="cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.383466 5028 scope.go:117] "RemoveContainer" 
containerID="6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.384665 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698\": container with ID starting with 6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698 not found: ID does not exist" containerID="6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.384726 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698"} err="failed to get container status \"6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698\": rpc error: code = NotFound desc = could not find container \"6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698\": container with ID starting with 6daf15157a7150f791241d84a583895c69d01f2e1c26ecadfc84cb1353835698 not found: ID does not exist" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.384772 5028 scope.go:117] "RemoveContainer" containerID="cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.390402 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a\": container with ID starting with cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a not found: ID does not exist" containerID="cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.390459 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a"} err="failed to get container status \"cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a\": rpc error: code = NotFound desc = could not find container \"cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a\": container with ID starting with cb8ba93a7c3ea7cdb186320a41bfd8b3679f29302b8a11591fd755e5bca16e2a not found: ID does not exist" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.390487 5028 scope.go:117] "RemoveContainer" containerID="2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.399112 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.399089873 podStartE2EDuration="2.399089873s" podCreationTimestamp="2025-11-23 08:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:12.375380679 +0000 UTC m=+7136.072785458" watchObservedRunningTime="2025-11-23 08:49:12.399089873 +0000 UTC m=+7136.096494652" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.405709 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.420300 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-combined-ca-bundle\") pod 
\"169ea2a5-1ce4-459b-adc6-9bdc8b517234\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.420364 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-config-data\") pod \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.420487 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl599\" (UniqueName: \"kubernetes.io/projected/169ea2a5-1ce4-459b-adc6-9bdc8b517234-kube-api-access-hl599\") pod \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.420541 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169ea2a5-1ce4-459b-adc6-9bdc8b517234-logs\") pod \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\" (UID: \"169ea2a5-1ce4-459b-adc6-9bdc8b517234\") " Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.422043 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169ea2a5-1ce4-459b-adc6-9bdc8b517234-logs" (OuterVolumeSpecName: "logs") pod "169ea2a5-1ce4-459b-adc6-9bdc8b517234" (UID: "169ea2a5-1ce4-459b-adc6-9bdc8b517234"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.429001 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169ea2a5-1ce4-459b-adc6-9bdc8b517234-kube-api-access-hl599" (OuterVolumeSpecName: "kube-api-access-hl599") pod "169ea2a5-1ce4-459b-adc6-9bdc8b517234" (UID: "169ea2a5-1ce4-459b-adc6-9bdc8b517234"). InnerVolumeSpecName "kube-api-access-hl599". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.446801 5028 scope.go:117] "RemoveContainer" containerID="3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.455077 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.464234 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-config-data" (OuterVolumeSpecName: "config-data") pod "169ea2a5-1ce4-459b-adc6-9bdc8b517234" (UID: "169ea2a5-1ce4-459b-adc6-9bdc8b517234"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.468033 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.469015 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-log" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469038 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-log" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.469055 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-api" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469063 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-api" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.469091 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-log" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469097 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-log" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.469128 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-metadata" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469135 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-metadata" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469322 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-metadata" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469349 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-api" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469361 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" containerName="nova-metadata-log" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.469381 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" containerName="nova-api-log" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.470593 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.476645 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.476621143 podStartE2EDuration="2.476621143s" podCreationTimestamp="2025-11-23 08:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:12.419837554 +0000 UTC m=+7136.117242333" watchObservedRunningTime="2025-11-23 08:49:12.476621143 +0000 UTC m=+7136.174025922" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.485435 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.489289 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.499267 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "169ea2a5-1ce4-459b-adc6-9bdc8b517234" (UID: "169ea2a5-1ce4-459b-adc6-9bdc8b517234"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.502119 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.519205 5028 scope.go:117] "RemoveContainer" containerID="2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.519781 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656\": container with ID starting with 2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656 not found: ID does not exist" containerID="2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.519879 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656"} err="failed to get container status \"2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656\": rpc error: code = NotFound desc = could not find container \"2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656\": container with ID starting with 2e979a773072830aa3e68cbc65cd8d4992efc44dd32ae6e53fdeacee49efa656 not found: ID does not exist" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.519985 5028 scope.go:117] "RemoveContainer" containerID="3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70" Nov 23 08:49:12 crc kubenswrapper[5028]: E1123 08:49:12.520282 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70\": container with ID starting with 3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70 not found: ID does not exist" containerID="3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.520366 5028 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70"} err="failed to get container status \"3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70\": rpc error: code = NotFound desc = could not find container \"3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70\": container with ID starting with 3b15ff2b1a2c26afc0d1395acffbf4a7e65082642b4d4b1cbfeb3a39f3df9b70 not found: ID does not exist" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.523436 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.523698 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169ea2a5-1ce4-459b-adc6-9bdc8b517234-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.523783 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl599\" (UniqueName: \"kubernetes.io/projected/169ea2a5-1ce4-459b-adc6-9bdc8b517234-kube-api-access-hl599\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.523851 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169ea2a5-1ce4-459b-adc6-9bdc8b517234-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.625736 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5vw\" (UniqueName: \"kubernetes.io/projected/7dd4b501-4513-45a2-9136-aad11a7150cf-kube-api-access-hb5vw\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.626461 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dd4b501-4513-45a2-9136-aad11a7150cf-logs\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.626625 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.626731 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-config-data\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.688365 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.703028 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.715394 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.717711 5028 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.725494 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.728413 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5vw\" (UniqueName: \"kubernetes.io/projected/7dd4b501-4513-45a2-9136-aad11a7150cf-kube-api-access-hb5vw\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.728572 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dd4b501-4513-45a2-9136-aad11a7150cf-logs\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.728652 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.728690 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-config-data\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.734938 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dd4b501-4513-45a2-9136-aad11a7150cf-logs\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.744919 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-config-data\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.745908 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.750029 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.751894 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5vw\" (UniqueName: \"kubernetes.io/projected/7dd4b501-4513-45a2-9136-aad11a7150cf-kube-api-access-hb5vw\") pod \"nova-metadata-0\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.804891 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.830560 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-config-data\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.830701 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6044e0c2-c84b-4c18-80a5-c0198d883e68-logs\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.830735 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc528\" (UniqueName: \"kubernetes.io/projected/6044e0c2-c84b-4c18-80a5-c0198d883e68-kube-api-access-fc528\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.830762 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.932356 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-config-data\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.933037 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6044e0c2-c84b-4c18-80a5-c0198d883e68-logs\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.933142 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc528\" (UniqueName: \"kubernetes.io/projected/6044e0c2-c84b-4c18-80a5-c0198d883e68-kube-api-access-fc528\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.933218 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.933852 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6044e0c2-c84b-4c18-80a5-c0198d883e68-logs\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.939774 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.939890 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-config-data\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:12 crc kubenswrapper[5028]: I1123 08:49:12.954145 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc528\" (UniqueName: \"kubernetes.io/projected/6044e0c2-c84b-4c18-80a5-c0198d883e68-kube-api-access-fc528\") pod \"nova-api-0\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " pod="openstack/nova-api-0" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.047990 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.158830 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169ea2a5-1ce4-459b-adc6-9bdc8b517234" path="/var/lib/kubelet/pods/169ea2a5-1ce4-459b-adc6-9bdc8b517234/volumes" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.159999 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500cdf4f-8422-4fe1-942e-6db6cbcbed60" path="/var/lib/kubelet/pods/500cdf4f-8422-4fe1-942e-6db6cbcbed60/volumes" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.161918 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653add63-a41e-4c9c-8fd4-ea7a5d0c2d63" path="/var/lib/kubelet/pods/653add63-a41e-4c9c-8fd4-ea7a5d0c2d63/volumes" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.294796 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.366195 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dd4b501-4513-45a2-9136-aad11a7150cf","Type":"ContainerStarted","Data":"70be0ceb8a7781c71169c30d1e287398781f0846502aa814bdc1e4b2749a3119"} Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.371712 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1665af0d-f89b-4704-95cc-4e46d2493132","Type":"ContainerStarted","Data":"a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d"} Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.371749 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1665af0d-f89b-4704-95cc-4e46d2493132","Type":"ContainerStarted","Data":"13f66c8f5306a02a498ed958b9608415c122815bada41904200aa1bbdd42b739"} Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.373105 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.390216 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.395439 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.395424395 podStartE2EDuration="2.395424395s" podCreationTimestamp="2025-11-23 08:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:13.38913826 +0000 UTC 
m=+7137.086543039" watchObservedRunningTime="2025-11-23 08:49:13.395424395 +0000 UTC m=+7137.092829174" Nov 23 08:49:13 crc kubenswrapper[5028]: I1123 08:49:13.610584 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 08:49:14 crc kubenswrapper[5028]: I1123 08:49:14.404326 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6044e0c2-c84b-4c18-80a5-c0198d883e68","Type":"ContainerStarted","Data":"3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a"} Nov 23 08:49:14 crc kubenswrapper[5028]: I1123 08:49:14.404404 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6044e0c2-c84b-4c18-80a5-c0198d883e68","Type":"ContainerStarted","Data":"4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766"} Nov 23 08:49:14 crc kubenswrapper[5028]: I1123 08:49:14.404461 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6044e0c2-c84b-4c18-80a5-c0198d883e68","Type":"ContainerStarted","Data":"67f7f6555fa8ce0a2240ee9fa91195f50a04a409c3fe35ec40704d9bd3fa2363"} Nov 23 08:49:14 crc kubenswrapper[5028]: I1123 08:49:14.408971 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dd4b501-4513-45a2-9136-aad11a7150cf","Type":"ContainerStarted","Data":"1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39"} Nov 23 08:49:14 crc kubenswrapper[5028]: I1123 08:49:14.409017 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dd4b501-4513-45a2-9136-aad11a7150cf","Type":"ContainerStarted","Data":"343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb"} Nov 23 08:49:14 crc kubenswrapper[5028]: I1123 08:49:14.440845 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.440823015 podStartE2EDuration="2.440823015s" podCreationTimestamp="2025-11-23 08:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:14.431334981 +0000 UTC m=+7138.128739760" watchObservedRunningTime="2025-11-23 08:49:14.440823015 +0000 UTC m=+7138.138227794" Nov 23 08:49:14 crc kubenswrapper[5028]: I1123 08:49:14.463989 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.463965275 podStartE2EDuration="2.463965275s" podCreationTimestamp="2025-11-23 08:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:14.451941998 +0000 UTC m=+7138.149346777" watchObservedRunningTime="2025-11-23 08:49:14.463965275 +0000 UTC m=+7138.161370044" Nov 23 08:49:15 crc kubenswrapper[5028]: I1123 08:49:15.733929 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.444496 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b5f042-5da4-4b17-b14d-a0aedb34a160","Type":"ContainerDied","Data":"23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a"} Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.444561 5028 generic.go:334] "Generic (PLEG): container finished" podID="d5b5f042-5da4-4b17-b14d-a0aedb34a160" 
containerID="23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a" exitCode=0 Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.445046 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d5b5f042-5da4-4b17-b14d-a0aedb34a160","Type":"ContainerDied","Data":"346004100ea50a79628effd747594df26e1b6dc375b990f341315da52cff9785"} Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.445065 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="346004100ea50a79628effd747594df26e1b6dc375b990f341315da52cff9785" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.451198 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.543791 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-combined-ca-bundle\") pod \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.544732 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-config-data\") pod \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.545179 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r2bq\" (UniqueName: \"kubernetes.io/projected/d5b5f042-5da4-4b17-b14d-a0aedb34a160-kube-api-access-4r2bq\") pod \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\" (UID: \"d5b5f042-5da4-4b17-b14d-a0aedb34a160\") " Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.551713 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b5f042-5da4-4b17-b14d-a0aedb34a160-kube-api-access-4r2bq" (OuterVolumeSpecName: "kube-api-access-4r2bq") pod "d5b5f042-5da4-4b17-b14d-a0aedb34a160" (UID: "d5b5f042-5da4-4b17-b14d-a0aedb34a160"). InnerVolumeSpecName "kube-api-access-4r2bq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.575661 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-config-data" (OuterVolumeSpecName: "config-data") pod "d5b5f042-5da4-4b17-b14d-a0aedb34a160" (UID: "d5b5f042-5da4-4b17-b14d-a0aedb34a160"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.576236 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5b5f042-5da4-4b17-b14d-a0aedb34a160" (UID: "d5b5f042-5da4-4b17-b14d-a0aedb34a160"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.648497 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r2bq\" (UniqueName: \"kubernetes.io/projected/d5b5f042-5da4-4b17-b14d-a0aedb34a160-kube-api-access-4r2bq\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.648923 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:16 crc kubenswrapper[5028]: I1123 08:49:16.648935 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5b5f042-5da4-4b17-b14d-a0aedb34a160-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.455654 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.489624 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.514489 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.537347 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:49:17 crc kubenswrapper[5028]: E1123 08:49:17.538081 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b5f042-5da4-4b17-b14d-a0aedb34a160" containerName="nova-scheduler-scheduler" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.538107 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b5f042-5da4-4b17-b14d-a0aedb34a160" containerName="nova-scheduler-scheduler" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.538357 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b5f042-5da4-4b17-b14d-a0aedb34a160" containerName="nova-scheduler-scheduler" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.539423 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.542986 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.553389 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.565972 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w48c\" (UniqueName: \"kubernetes.io/projected/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-kube-api-access-7w48c\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.566091 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.566935 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-config-data\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.669614 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w48c\" (UniqueName: \"kubernetes.io/projected/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-kube-api-access-7w48c\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.669708 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.669781 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-config-data\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.676459 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-config-data\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.677332 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.696135 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w48c\" (UniqueName: 
\"kubernetes.io/projected/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-kube-api-access-7w48c\") pod \"nova-scheduler-0\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") " pod="openstack/nova-scheduler-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.808615 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.809125 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 23 08:49:17 crc kubenswrapper[5028]: I1123 08:49:17.866693 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 23 08:49:18 crc kubenswrapper[5028]: I1123 08:49:18.430618 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 23 08:49:18 crc kubenswrapper[5028]: I1123 08:49:18.470226 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5","Type":"ContainerStarted","Data":"24066b8ae1c1df9ab4bf1497135792cdb822dd93f9d0ebc32025870f0f2235ca"} Nov 23 08:49:19 crc kubenswrapper[5028]: I1123 08:49:19.077363 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b5f042-5da4-4b17-b14d-a0aedb34a160" path="/var/lib/kubelet/pods/d5b5f042-5da4-4b17-b14d-a0aedb34a160/volumes" Nov 23 08:49:19 crc kubenswrapper[5028]: I1123 08:49:19.487834 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5","Type":"ContainerStarted","Data":"dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb"} Nov 23 08:49:19 crc kubenswrapper[5028]: I1123 08:49:19.520859 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.520814975 podStartE2EDuration="2.520814975s" podCreationTimestamp="2025-11-23 08:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:19.513422843 +0000 UTC m=+7143.210827722" watchObservedRunningTime="2025-11-23 08:49:19.520814975 +0000 UTC m=+7143.218219824" Nov 23 08:49:20 crc kubenswrapper[5028]: I1123 08:49:20.733286 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:20 crc kubenswrapper[5028]: I1123 08:49:20.747353 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:20 crc kubenswrapper[5028]: I1123 08:49:20.753864 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 23 08:49:21 crc kubenswrapper[5028]: I1123 08:49:21.530321 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 23 08:49:21 crc kubenswrapper[5028]: I1123 08:49:21.859758 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 23 08:49:22 crc kubenswrapper[5028]: I1123 08:49:22.806152 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:49:22 crc kubenswrapper[5028]: I1123 08:49:22.806200 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 23 08:49:22 crc kubenswrapper[5028]: I1123 08:49:22.867604 
5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 23 08:49:23 crc kubenswrapper[5028]: I1123 08:49:23.049451 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:49:23 crc kubenswrapper[5028]: I1123 08:49:23.049511 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 23 08:49:23 crc kubenswrapper[5028]: I1123 08:49:23.888210 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.95:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:49:23 crc kubenswrapper[5028]: I1123 08:49:23.888401 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.95:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:49:24 crc kubenswrapper[5028]: I1123 08:49:24.132211 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.96:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:49:24 crc kubenswrapper[5028]: I1123 08:49:24.132230 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.96:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 23 08:49:27 crc kubenswrapper[5028]: I1123 08:49:27.867920 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 23 08:49:27 crc kubenswrapper[5028]: I1123 08:49:27.904620 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 23 08:49:28 crc kubenswrapper[5028]: I1123 08:49:28.628150 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.809072 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.809650 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.811397 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.811808 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.873089 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.875473 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.877686 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.898958 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.961833 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58kzr\" (UniqueName: \"kubernetes.io/projected/0f6422d0-cb9b-41f2-a692-fa8da466db03-kube-api-access-58kzr\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.961880 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.961928 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.962222 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6422d0-cb9b-41f2-a692-fa8da466db03-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.962561 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:32 crc kubenswrapper[5028]: I1123 08:49:32.962842 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.063866 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.063922 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.064670 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.064699 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.066615 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.066911 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.067068 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58kzr\" (UniqueName: \"kubernetes.io/projected/0f6422d0-cb9b-41f2-a692-fa8da466db03-kube-api-access-58kzr\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.067113 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.067215 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.067326 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6422d0-cb9b-41f2-a692-fa8da466db03-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.067567 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6422d0-cb9b-41f2-a692-fa8da466db03-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.074388 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.075590 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.077185 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.082775 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-scripts\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.083168 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.089072 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.095071 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58kzr\" (UniqueName: \"kubernetes.io/projected/0f6422d0-cb9b-41f2-a692-fa8da466db03-kube-api-access-58kzr\") pod \"cinder-scheduler-0\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.213324 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:49:33 crc kubenswrapper[5028]: I1123 08:49:33.760594 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:34 crc kubenswrapper[5028]: I1123 08:49:34.690445 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f6422d0-cb9b-41f2-a692-fa8da466db03","Type":"ContainerStarted","Data":"2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2"} Nov 23 08:49:34 crc kubenswrapper[5028]: I1123 08:49:34.691344 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f6422d0-cb9b-41f2-a692-fa8da466db03","Type":"ContainerStarted","Data":"1524beaf653bd2efd8f096835c439da4e3b9a10b1872fb888031d6ab56f8ed62"} Nov 23 08:49:34 crc kubenswrapper[5028]: I1123 08:49:34.728489 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:49:34 crc kubenswrapper[5028]: I1123 08:49:34.728893 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api-log" containerID="cri-o://d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f" gracePeriod=30 Nov 23 08:49:34 crc kubenswrapper[5028]: I1123 08:49:34.728983 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api" containerID="cri-o://d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e" gracePeriod=30 Nov 23 08:49:35 crc kubenswrapper[5028]: I1123 08:49:35.701348 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f6422d0-cb9b-41f2-a692-fa8da466db03","Type":"ContainerStarted","Data":"2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb"} Nov 23 08:49:35 crc kubenswrapper[5028]: I1123 08:49:35.704694 5028 generic.go:334] "Generic (PLEG): container finished" podID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerID="d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f" exitCode=143 Nov 23 08:49:35 crc kubenswrapper[5028]: I1123 08:49:35.704751 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee","Type":"ContainerDied","Data":"d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f"} Nov 23 08:49:35 crc kubenswrapper[5028]: I1123 08:49:35.732098 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.443645304 podStartE2EDuration="3.732064229s" podCreationTimestamp="2025-11-23 08:49:32 +0000 UTC" firstStartedPulling="2025-11-23 08:49:33.762304949 +0000 UTC m=+7157.459709728" lastFinishedPulling="2025-11-23 08:49:34.050723874 +0000 UTC m=+7157.748128653" observedRunningTime="2025-11-23 08:49:35.721309494 +0000 UTC m=+7159.418714293" watchObservedRunningTime="2025-11-23 08:49:35.732064229 +0000 UTC m=+7159.429469008" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.119715 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.121414 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.125024 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.140891 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288151 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288234 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288294 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288318 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288358 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288397 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288576 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d8dd319-e596-4062-8aa2-9637c332a0d7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288811 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288873 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n628\" (UniqueName: \"kubernetes.io/projected/0d8dd319-e596-4062-8aa2-9637c332a0d7-kube-api-access-5n628\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.288931 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.289067 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.289147 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-run\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.289405 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.289427 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.289496 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.289566 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.391041 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.391591 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-run\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.391694 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-run\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.391766 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.391992 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.392925 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.392980 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393017 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: 
I1123 08:49:36.393158 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393252 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393317 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393328 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393403 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393432 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393460 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393499 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393511 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393536 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393570 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d8dd319-e596-4062-8aa2-9637c332a0d7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393630 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-sys\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393651 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393671 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-dev\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393723 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n628\" (UniqueName: \"kubernetes.io/projected/0d8dd319-e596-4062-8aa2-9637c332a0d7-kube-api-access-5n628\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393788 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.393822 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.394104 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0d8dd319-e596-4062-8aa2-9637c332a0d7-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.402758 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: 
I1123 08:49:36.402877 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.403692 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.406492 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0d8dd319-e596-4062-8aa2-9637c332a0d7-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.406725 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d8dd319-e596-4062-8aa2-9637c332a0d7-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.422687 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n628\" (UniqueName: \"kubernetes.io/projected/0d8dd319-e596-4062-8aa2-9637c332a0d7-kube-api-access-5n628\") pod \"cinder-volume-volume1-0\" (UID: \"0d8dd319-e596-4062-8aa2-9637c332a0d7\") " pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.447353 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.819442 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.821356 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.823642 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 23 08:49:36 crc kubenswrapper[5028]: I1123 08:49:36.837355 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012195 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-run\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012256 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012322 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012358 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-scripts\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012381 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-ceph\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012424 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-config-data\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012447 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012487 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-lib-modules\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012702 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012797 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012845 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012878 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012911 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhqb\" (UniqueName: \"kubernetes.io/projected/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-kube-api-access-5zhqb\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.012990 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-dev\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.013120 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-sys\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.013178 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115279 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-lib-modules\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115394 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " 
pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115502 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115532 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115566 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115585 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhqb\" (UniqueName: \"kubernetes.io/projected/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-kube-api-access-5zhqb\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115621 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-dev\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115645 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-sys\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115669 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115693 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-run\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115708 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115754 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-nvme\") pod \"cinder-backup-0\" (UID: 
\"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115775 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-scripts\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115802 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-ceph\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115838 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-config-data\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.115861 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116010 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116193 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-sys\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116214 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-lib-modules\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116646 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116704 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116713 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-machine-id\") pod \"cinder-backup-0\" (UID: 
\"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116752 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116797 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-etc-nvme\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.116825 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-run\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.117493 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-dev\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.122964 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-ceph\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.123607 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-config-data-custom\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.123788 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-config-data\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.134413 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-scripts\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.134760 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.138121 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhqb\" (UniqueName: \"kubernetes.io/projected/5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf-kube-api-access-5zhqb\") pod \"cinder-backup-0\" (UID: \"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf\") " pod="openstack/cinder-backup-0" Nov 23 08:49:37 
crc kubenswrapper[5028]: I1123 08:49:37.152281 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 23 08:49:37 crc kubenswrapper[5028]: W1123 08:49:37.447811 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d8dd319_e596_4062_8aa2_9637c332a0d7.slice/crio-47e080544b2881843781db750bd07cedb9116a703e1fdf660e09335305c72479 WatchSource:0}: Error finding container 47e080544b2881843781db750bd07cedb9116a703e1fdf660e09335305c72479: Status 404 returned error can't find the container with id 47e080544b2881843781db750bd07cedb9116a703e1fdf660e09335305c72479 Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.449082 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.571594 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 23 08:49:37 crc kubenswrapper[5028]: W1123 08:49:37.575653 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0d5e1f_6ca1_49c6_b33d_407da6c3cccf.slice/crio-223869fda9c15049488a0a3516a38f7197d76086ab3f10aecd6564fe238a98a8 WatchSource:0}: Error finding container 223869fda9c15049488a0a3516a38f7197d76086ab3f10aecd6564fe238a98a8: Status 404 returned error can't find the container with id 223869fda9c15049488a0a3516a38f7197d76086ab3f10aecd6564fe238a98a8 Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.758115 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf","Type":"ContainerStarted","Data":"223869fda9c15049488a0a3516a38f7197d76086ab3f10aecd6564fe238a98a8"} Nov 23 08:49:37 crc kubenswrapper[5028]: I1123 08:49:37.760756 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0d8dd319-e596-4062-8aa2-9637c332a0d7","Type":"ContainerStarted","Data":"47e080544b2881843781db750bd07cedb9116a703e1fdf660e09335305c72479"} Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.057511 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.91:8776/healthcheck\": read tcp 10.217.0.2:34388->10.217.1.91:8776: read: connection reset by peer" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.214130 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.673281 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.798126 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0d8dd319-e596-4062-8aa2-9637c332a0d7","Type":"ContainerStarted","Data":"a8e0389dffa4382b5296d959027fbfb4760f46e341d100b087e48c4078a1c673"} Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.798211 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"0d8dd319-e596-4062-8aa2-9637c332a0d7","Type":"ContainerStarted","Data":"8e56d3421df2b7f2305eaecf0869311af4ef43d46aa84e86224d87e430bc8dca"} Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.811382 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf","Type":"ContainerStarted","Data":"595a3bd9188e43e45e66db7ba777e105cc769c1fe096b66032224576b6ab1122"} Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.811469 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf","Type":"ContainerStarted","Data":"d34cc3d15fe5cf02a5549bc2e1f9657bef3a50b2b4d7aec0f0b78d39386beceb"} Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.827323 5028 generic.go:334] "Generic (PLEG): container finished" podID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerID="d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e" exitCode=0 Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.827975 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.828533 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee","Type":"ContainerDied","Data":"d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e"} Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.828679 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee","Type":"ContainerDied","Data":"af5f01efccb61444dc9547dda03a2f6696da9dfbb81625d7e76a02f6a040bb42"} Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.828753 5028 scope.go:117] "RemoveContainer" containerID="d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.842719 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.492410151 podStartE2EDuration="2.842694119s" podCreationTimestamp="2025-11-23 08:49:36 +0000 UTC" firstStartedPulling="2025-11-23 08:49:37.450879226 +0000 UTC m=+7161.148284005" lastFinishedPulling="2025-11-23 08:49:37.801163194 +0000 UTC m=+7161.498567973" observedRunningTime="2025-11-23 08:49:38.836102977 +0000 UTC m=+7162.533507756" watchObservedRunningTime="2025-11-23 08:49:38.842694119 +0000 UTC m=+7162.540098898" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.871547 5028 scope.go:117] "RemoveContainer" containerID="d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.873885 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-logs\") pod 
\"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.873935 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-combined-ca-bundle\") pod \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.874076 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-scripts\") pod \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.874104 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrg4d\" (UniqueName: \"kubernetes.io/projected/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-kube-api-access-nrg4d\") pod \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.874192 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data-custom\") pod \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.874220 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data\") pod \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.874257 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-etc-machine-id\") pod \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\" (UID: \"dff240a0-1f87-49ad-b2a7-5e3e5dba84ee\") " Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.874589 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" (UID: "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.877100 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-logs" (OuterVolumeSpecName: "logs") pod "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" (UID: "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.878543 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.884003 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-scripts" (OuterVolumeSpecName: "scripts") pod "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" (UID: "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.885269 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" (UID: "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.887284 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.381750565 podStartE2EDuration="2.887231906s" podCreationTimestamp="2025-11-23 08:49:36 +0000 UTC" firstStartedPulling="2025-11-23 08:49:37.579007992 +0000 UTC m=+7161.276412771" lastFinishedPulling="2025-11-23 08:49:38.084489333 +0000 UTC m=+7161.781894112" observedRunningTime="2025-11-23 08:49:38.872915024 +0000 UTC m=+7162.570319803" watchObservedRunningTime="2025-11-23 08:49:38.887231906 +0000 UTC m=+7162.584636685" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.919705 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-kube-api-access-nrg4d" (OuterVolumeSpecName: "kube-api-access-nrg4d") pod "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" (UID: "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee"). InnerVolumeSpecName "kube-api-access-nrg4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.924152 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" (UID: "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.942008 5028 scope.go:117] "RemoveContainer" containerID="d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e" Nov 23 08:49:38 crc kubenswrapper[5028]: E1123 08:49:38.942667 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e\": container with ID starting with d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e not found: ID does not exist" containerID="d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.942695 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e"} err="failed to get container status \"d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e\": rpc error: code = NotFound desc = could not find container \"d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e\": container with ID starting with d0e5dd592e05d22e65950c4a46b790bf2b5eff80f806e313e34bc55ad6c2556e not found: ID does not exist" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.942725 5028 scope.go:117] "RemoveContainer" containerID="d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f" Nov 23 08:49:38 crc kubenswrapper[5028]: E1123 08:49:38.943179 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f\": container with ID starting with d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f not found: ID does not exist" containerID="d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.943196 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f"} err="failed to get container status \"d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f\": rpc error: code = NotFound desc = could not find container \"d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f\": container with ID starting with d496e34cca4ec8324578d12d59090dec60e3336324722c9a460a02f25bba536f not found: ID does not exist" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.946138 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data" (OuterVolumeSpecName: "config-data") pod "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" (UID: "dff240a0-1f87-49ad-b2a7-5e3e5dba84ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.979297 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.979344 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.979355 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrg4d\" (UniqueName: \"kubernetes.io/projected/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-kube-api-access-nrg4d\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.979367 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.979376 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:38 crc kubenswrapper[5028]: I1123 08:49:38.979384 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.154017 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.169156 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.179884 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:49:39 crc kubenswrapper[5028]: E1123 08:49:39.180474 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api-log" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.180510 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api-log" Nov 23 08:49:39 crc kubenswrapper[5028]: E1123 08:49:39.180539 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.180548 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.180775 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api-log" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.180793 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" containerName="cinder-api" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.182533 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.184484 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.184542 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xshh\" (UniqueName: \"kubernetes.io/projected/e790c829-88fb-40db-a145-c90769f04d24-kube-api-access-5xshh\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.184560 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-scripts\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.184611 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-config-data\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.184651 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e790c829-88fb-40db-a145-c90769f04d24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.184676 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-config-data-custom\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.184722 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e790c829-88fb-40db-a145-c90769f04d24-logs\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.185233 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.187857 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.286266 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-config-data\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.286365 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e790c829-88fb-40db-a145-c90769f04d24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.286408 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-config-data-custom\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.286476 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e790c829-88fb-40db-a145-c90769f04d24-logs\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.286548 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.286591 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xshh\" (UniqueName: \"kubernetes.io/projected/e790c829-88fb-40db-a145-c90769f04d24-kube-api-access-5xshh\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.286614 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-scripts\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.288818 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e790c829-88fb-40db-a145-c90769f04d24-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.289025 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e790c829-88fb-40db-a145-c90769f04d24-logs\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.291592 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-scripts\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.292830 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-config-data\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.293004 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-config-data-custom\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.293556 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e790c829-88fb-40db-a145-c90769f04d24-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.312162 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xshh\" (UniqueName: \"kubernetes.io/projected/e790c829-88fb-40db-a145-c90769f04d24-kube-api-access-5xshh\") pod \"cinder-api-0\" (UID: \"e790c829-88fb-40db-a145-c90769f04d24\") " pod="openstack/cinder-api-0" Nov 23 08:49:39 crc kubenswrapper[5028]: I1123 08:49:39.513695 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 23 08:49:40 crc kubenswrapper[5028]: I1123 08:49:40.107691 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 23 08:49:40 crc kubenswrapper[5028]: I1123 08:49:40.860548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e790c829-88fb-40db-a145-c90769f04d24","Type":"ContainerStarted","Data":"f0fb3c58dbd73998fbde1e0434653dba4bc48b323a1bd14b6a504e52b75a9267"} Nov 23 08:49:40 crc kubenswrapper[5028]: I1123 08:49:40.861544 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e790c829-88fb-40db-a145-c90769f04d24","Type":"ContainerStarted","Data":"cb5fc80be30c386787e58acdeb75d32589ffd70abd7f5709403d14f820e74335"} Nov 23 08:49:41 crc kubenswrapper[5028]: I1123 08:49:41.070567 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff240a0-1f87-49ad-b2a7-5e3e5dba84ee" path="/var/lib/kubelet/pods/dff240a0-1f87-49ad-b2a7-5e3e5dba84ee/volumes" Nov 23 08:49:41 crc kubenswrapper[5028]: I1123 08:49:41.450671 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:41 crc kubenswrapper[5028]: I1123 08:49:41.877563 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e790c829-88fb-40db-a145-c90769f04d24","Type":"ContainerStarted","Data":"568b7f870af921d785493033258b35c5521ccb8026c9d483e4bc857022194f80"} Nov 23 08:49:41 crc kubenswrapper[5028]: I1123 08:49:41.878998 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 23 08:49:41 crc kubenswrapper[5028]: I1123 08:49:41.909053 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.909023209 podStartE2EDuration="2.909023209s" podCreationTimestamp="2025-11-23 08:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:41.902361445 +0000 UTC m=+7165.599766224" watchObservedRunningTime="2025-11-23 08:49:41.909023209 +0000 UTC m=+7165.606427988" Nov 23 08:49:42 crc kubenswrapper[5028]: I1123 08:49:42.152988 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 23 08:49:43 crc kubenswrapper[5028]: I1123 08:49:43.429883 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/cinder-scheduler-0" Nov 23 08:49:43 crc kubenswrapper[5028]: I1123 08:49:43.529609 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:43 crc kubenswrapper[5028]: I1123 08:49:43.912443 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="cinder-scheduler" containerID="cri-o://2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2" gracePeriod=30 Nov 23 08:49:43 crc kubenswrapper[5028]: I1123 08:49:43.912627 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="probe" containerID="cri-o://2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb" gracePeriod=30 Nov 23 08:49:44 crc kubenswrapper[5028]: I1123 08:49:44.927646 5028 generic.go:334] "Generic (PLEG): container finished" podID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerID="2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb" exitCode=0 Nov 23 08:49:44 crc kubenswrapper[5028]: I1123 08:49:44.927733 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f6422d0-cb9b-41f2-a692-fa8da466db03","Type":"ContainerDied","Data":"2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb"} Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.760139 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.890667 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-scripts\") pod \"0f6422d0-cb9b-41f2-a692-fa8da466db03\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.890838 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58kzr\" (UniqueName: \"kubernetes.io/projected/0f6422d0-cb9b-41f2-a692-fa8da466db03-kube-api-access-58kzr\") pod \"0f6422d0-cb9b-41f2-a692-fa8da466db03\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.890927 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data\") pod \"0f6422d0-cb9b-41f2-a692-fa8da466db03\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.891026 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-combined-ca-bundle\") pod \"0f6422d0-cb9b-41f2-a692-fa8da466db03\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.891072 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6422d0-cb9b-41f2-a692-fa8da466db03-etc-machine-id\") pod \"0f6422d0-cb9b-41f2-a692-fa8da466db03\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.891207 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data-custom\") pod \"0f6422d0-cb9b-41f2-a692-fa8da466db03\" (UID: \"0f6422d0-cb9b-41f2-a692-fa8da466db03\") " Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.891301 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f6422d0-cb9b-41f2-a692-fa8da466db03-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0f6422d0-cb9b-41f2-a692-fa8da466db03" (UID: "0f6422d0-cb9b-41f2-a692-fa8da466db03"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.891936 5028 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6422d0-cb9b-41f2-a692-fa8da466db03-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.899407 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f6422d0-cb9b-41f2-a692-fa8da466db03-kube-api-access-58kzr" (OuterVolumeSpecName: "kube-api-access-58kzr") pod "0f6422d0-cb9b-41f2-a692-fa8da466db03" (UID: "0f6422d0-cb9b-41f2-a692-fa8da466db03"). InnerVolumeSpecName "kube-api-access-58kzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.901225 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-scripts" (OuterVolumeSpecName: "scripts") pod "0f6422d0-cb9b-41f2-a692-fa8da466db03" (UID: "0f6422d0-cb9b-41f2-a692-fa8da466db03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.905032 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f6422d0-cb9b-41f2-a692-fa8da466db03" (UID: "0f6422d0-cb9b-41f2-a692-fa8da466db03"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.964087 5028 generic.go:334] "Generic (PLEG): container finished" podID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerID="2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2" exitCode=0 Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.964160 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f6422d0-cb9b-41f2-a692-fa8da466db03","Type":"ContainerDied","Data":"2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2"} Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.964202 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0f6422d0-cb9b-41f2-a692-fa8da466db03","Type":"ContainerDied","Data":"1524beaf653bd2efd8f096835c439da4e3b9a10b1872fb888031d6ab56f8ed62"} Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.964240 5028 scope.go:117] "RemoveContainer" containerID="2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.964290 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.981434 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f6422d0-cb9b-41f2-a692-fa8da466db03" (UID: "0f6422d0-cb9b-41f2-a692-fa8da466db03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.994739 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.994794 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58kzr\" (UniqueName: \"kubernetes.io/projected/0f6422d0-cb9b-41f2-a692-fa8da466db03-kube-api-access-58kzr\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.994811 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:45 crc kubenswrapper[5028]: I1123 08:49:45.994824 5028 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.025308 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data" (OuterVolumeSpecName: "config-data") pod "0f6422d0-cb9b-41f2-a692-fa8da466db03" (UID: "0f6422d0-cb9b-41f2-a692-fa8da466db03"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.054782 5028 scope.go:117] "RemoveContainer" containerID="2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.087116 5028 scope.go:117] "RemoveContainer" containerID="2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb" Nov 23 08:49:46 crc kubenswrapper[5028]: E1123 08:49:46.087574 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb\": container with ID starting with 2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb not found: ID does not exist" containerID="2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.087614 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb"} err="failed to get container status \"2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb\": rpc error: code = NotFound desc = could not find container \"2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb\": container with ID starting with 2e7d19ad13bf046304e0e1be81b0afaff8c676b98d6daf765d8df0c832a8bebb not found: ID does not exist" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.087643 5028 scope.go:117] "RemoveContainer" containerID="2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2" Nov 23 08:49:46 crc kubenswrapper[5028]: E1123 08:49:46.088043 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2\": container with ID starting with 2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2 not found: ID does not exist" containerID="2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.088095 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2"} err="failed to get container status \"2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2\": rpc error: code = NotFound desc = could not find container \"2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2\": container with ID starting with 2881b7675ae0cbf86ddc39828c08f6da167d5b8c7b0bcc8ed3b57b3c274984c2 not found: ID does not exist" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.097012 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6422d0-cb9b-41f2-a692-fa8da466db03-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.296822 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.305534 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.338785 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:46 crc kubenswrapper[5028]: E1123 08:49:46.339433 5028 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="cinder-scheduler" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.339452 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="cinder-scheduler" Nov 23 08:49:46 crc kubenswrapper[5028]: E1123 08:49:46.339507 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="probe" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.339514 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="probe" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.339727 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="cinder-scheduler" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.339748 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" containerName="probe" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.341179 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.343512 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.357620 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.505292 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.505390 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.505446 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8zg7\" (UniqueName: \"kubernetes.io/projected/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-kube-api-access-r8zg7\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.505620 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.506003 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.506080 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.610634 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.611126 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.611185 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8zg7\" (UniqueName: \"kubernetes.io/projected/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-kube-api-access-r8zg7\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.611242 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.611341 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.611406 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.611535 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.618639 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.618819 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " 
pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.619115 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.619170 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.641625 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8zg7\" (UniqueName: \"kubernetes.io/projected/1d3e2c69-0b3f-4154-a18a-c6ad665cbc58-kube-api-access-r8zg7\") pod \"cinder-scheduler-0\" (UID: \"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58\") " pod="openstack/cinder-scheduler-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.664003 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 23 08:49:46 crc kubenswrapper[5028]: I1123 08:49:46.666539 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 23 08:49:47 crc kubenswrapper[5028]: I1123 08:49:47.067503 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6422d0-cb9b-41f2-a692-fa8da466db03" path="/var/lib/kubelet/pods/0f6422d0-cb9b-41f2-a692-fa8da466db03/volumes" Nov 23 08:49:47 crc kubenswrapper[5028]: I1123 08:49:47.206797 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 23 08:49:47 crc kubenswrapper[5028]: I1123 08:49:47.389500 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 23 08:49:47 crc kubenswrapper[5028]: I1123 08:49:47.993835 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58","Type":"ContainerStarted","Data":"815df7c98c3b1b4bb93908838a4820d42c4d48f1b605878b58f873320be54633"} Nov 23 08:49:47 crc kubenswrapper[5028]: I1123 08:49:47.994278 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58","Type":"ContainerStarted","Data":"d05422efa33134506f9039ede319294b56c4d2ec0ef012e58d168b74a3bb3a47"} Nov 23 08:49:49 crc kubenswrapper[5028]: I1123 08:49:49.011351 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d3e2c69-0b3f-4154-a18a-c6ad665cbc58","Type":"ContainerStarted","Data":"8ce2fbe26af62198f8ad7ce518af9b4badd1eadfb9c30b80e9de8ce850542af8"} Nov 23 08:49:49 crc kubenswrapper[5028]: I1123 08:49:49.034394 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.03437359 podStartE2EDuration="3.03437359s" podCreationTimestamp="2025-11-23 08:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:49:49.031237263 +0000 UTC m=+7172.728642042" watchObservedRunningTime="2025-11-23 08:49:49.03437359 +0000 UTC m=+7172.731778369" Nov 23 08:49:51 
Nov 23 08:49:51 crc kubenswrapper[5028]: I1123 08:49:51.417269 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 23 08:49:51 crc kubenswrapper[5028]: I1123 08:49:51.668219 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 23 08:49:57 crc kubenswrapper[5028]: I1123 08:49:57.079261 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 23 08:50:00 crc kubenswrapper[5028]: I1123 08:50:00.946315 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:50:00 crc kubenswrapper[5028]: I1123 08:50:00.946969 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:50:12 crc kubenswrapper[5028]: I1123 08:50:12.061650 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bxk6x"]
Nov 23 08:50:12 crc kubenswrapper[5028]: I1123 08:50:12.072876 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-aad8-account-create-v8929"]
Nov 23 08:50:12 crc kubenswrapper[5028]: I1123 08:50:12.087587 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bxk6x"]
Nov 23 08:50:12 crc kubenswrapper[5028]: I1123 08:50:12.098715 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-aad8-account-create-v8929"]
Nov 23 08:50:13 crc kubenswrapper[5028]: I1123 08:50:13.070969 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17dc3da-8776-40a5-a2a3-2f86ae78be13" path="/var/lib/kubelet/pods/d17dc3da-8776-40a5-a2a3-2f86ae78be13/volumes"
Nov 23 08:50:13 crc kubenswrapper[5028]: I1123 08:50:13.072415 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed78771e-329e-47a4-be0e-fa85fb5eba7d" path="/var/lib/kubelet/pods/ed78771e-329e-47a4-be0e-fa85fb5eba7d/volumes"
Nov 23 08:50:24 crc kubenswrapper[5028]: I1123 08:50:24.045579 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vdvks"]
Nov 23 08:50:24 crc kubenswrapper[5028]: I1123 08:50:24.055161 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vdvks"]
Nov 23 08:50:25 crc kubenswrapper[5028]: I1123 08:50:25.074829 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d65160-2c51-40af-88b1-bd26e76c2a42" path="/var/lib/kubelet/pods/e9d65160-2c51-40af-88b1-bd26e76c2a42/volumes"
Nov 23 08:50:27 crc kubenswrapper[5028]: I1123 08:50:27.620314 5028 scope.go:117] "RemoveContainer" containerID="d7f4d40679c6d26d630b5ccad11aecb6777f98fd4715bd8e6c54e47a06c617f5"
Nov 23 08:50:27 crc kubenswrapper[5028]: I1123 08:50:27.673891 5028 scope.go:117] "RemoveContainer" containerID="1ae03e844110d1f0d034ee22c23cc9bbb74ab6eb8e74b89078a11b3f04b481fb"
Nov 23 08:50:27 crc kubenswrapper[5028]: I1123 08:50:27.717344 5028 scope.go:117] "RemoveContainer" containerID="2b7147cec71e05e455c361bc4834576f5448ace823a31d9cafdd91c3b1b937aa"
Nov 23 08:50:30 crc kubenswrapper[5028]: I1123 08:50:30.946869 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:50:30 crc kubenswrapper[5028]: I1123 08:50:30.947465 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:50:38 crc kubenswrapper[5028]: I1123 08:50:38.046872 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lz5zn"]
Nov 23 08:50:38 crc kubenswrapper[5028]: I1123 08:50:38.060751 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lz5zn"]
Nov 23 08:50:39 crc kubenswrapper[5028]: I1123 08:50:39.067260 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde709e3-05f1-4d49-ab07-dc201e568476" path="/var/lib/kubelet/pods/bde709e3-05f1-4d49-ab07-dc201e568476/volumes"
Nov 23 08:51:00 crc kubenswrapper[5028]: I1123 08:51:00.947032 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 08:51:00 crc kubenswrapper[5028]: I1123 08:51:00.948022 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 08:51:00 crc kubenswrapper[5028]: I1123 08:51:00.948117 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 08:51:00 crc kubenswrapper[5028]: I1123 08:51:00.949289 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a64b72f76a8fe768b7b1776afaa348b6635ec48477767013a700f559c89fe286"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 08:51:00 crc kubenswrapper[5028]: I1123 08:51:00.949401 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://a64b72f76a8fe768b7b1776afaa348b6635ec48477767013a700f559c89fe286" gracePeriod=600
Nov 23 08:51:01 crc kubenswrapper[5028]: I1123 08:51:01.899398 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="a64b72f76a8fe768b7b1776afaa348b6635ec48477767013a700f559c89fe286" exitCode=0
Nov 23 08:51:01 crc kubenswrapper[5028]: I1123 08:51:01.899489 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"a64b72f76a8fe768b7b1776afaa348b6635ec48477767013a700f559c89fe286"}
Nov 23 08:51:01 crc kubenswrapper[5028]: I1123 08:51:01.900231 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5"}
Nov 23 08:51:01 crc kubenswrapper[5028]: I1123 08:51:01.900332 5028 scope.go:117] "RemoveContainer" containerID="0495b322062c1d5c0e442a5fa6358cdd252a56267283000dafde9abca3457146"
Nov 23 08:51:26 crc kubenswrapper[5028]: E1123 08:51:26.161015 5028 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:55078->38.102.83.145:39767: write tcp 38.102.83.145:55078->38.102.83.145:39767: write: broken pipe
Nov 23 08:51:27 crc kubenswrapper[5028]: I1123 08:51:27.866222 5028 scope.go:117] "RemoveContainer" containerID="f1fa2414eb0f1977f9b52ab134d016665a4ed8fe81318f1b254bb3eaf09cfdb1"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.316642 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7455989c7c-5lq82"]
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.322833 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.328015 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.328353 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.328514 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5r7r2"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.329358 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.345434 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7455989c7c-5lq82"]
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.390387 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.390666 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-log" containerID="cri-o://b2f944fbeea1c52002d31c10e03f66511e343ca680368a88e0dc2c59ffe196e8" gracePeriod=30
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.391026 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-httpd" containerID="cri-o://1ec8ecd06481fb58c076cba73ae5d15e7ecec9e86447d4e45f6931f14eae8da9" gracePeriod=30
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.458183 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78d9fdd78f-r8mq4"]
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.460205 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.466981 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-config-data\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.467145 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-scripts\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.467194 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmw7\" (UniqueName: \"kubernetes.io/projected/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-kube-api-access-nvmw7\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.467260 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-logs\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.467292 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-horizon-secret-key\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.483041 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.483448 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-log" containerID="cri-o://3b6d89f1fd322d06c31b2709dce2b71789b730e14c896c0efa9f5631fbe5f85b" gracePeriod=30
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.483676 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-httpd" containerID="cri-o://958068f5100bc7cc0870d9bae5de4e4b428025dcecfad5e4d0c9681ea45d3fab" gracePeriod=30
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.512844 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d9fdd78f-r8mq4"]
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.569462 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-logs\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570020 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-horizon-secret-key\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570084 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-config-data\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570146 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-scripts\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570182 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-config-data\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570243 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-scripts\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570262 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-logs\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570283 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfjdz\" (UniqueName: \"kubernetes.io/projected/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-kube-api-access-nfjdz\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570737 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmw7\" (UniqueName: \"kubernetes.io/projected/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-kube-api-access-nvmw7\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.570938 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-logs\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.571119 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-horizon-secret-key\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.571397 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-scripts\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.571819 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-config-data\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.580057 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-horizon-secret-key\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.589788 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmw7\" (UniqueName: \"kubernetes.io/projected/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-kube-api-access-nvmw7\") pod \"horizon-7455989c7c-5lq82\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.653691 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7455989c7c-5lq82"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.673499 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-scripts\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.673559 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-config-data\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.673631 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfjdz\" (UniqueName: \"kubernetes.io/projected/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-kube-api-access-nfjdz\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.673666 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-logs\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.673689 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-horizon-secret-key\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.674893 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-logs\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.675234 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-scripts\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.675790 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-config-data\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.678835 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-horizon-secret-key\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.695287 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfjdz\" (UniqueName: \"kubernetes.io/projected/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-kube-api-access-nfjdz\") pod \"horizon-78d9fdd78f-r8mq4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:30 crc kubenswrapper[5028]: I1123 08:51:30.782110 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78d9fdd78f-r8mq4"
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.179184 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7455989c7c-5lq82"]
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.239867 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85dc44d577-v4j2h"]
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.246601 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85dc44d577-v4j2h"
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.259886 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85dc44d577-v4j2h"]
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.305394 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-logs\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h"
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.305459 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-scripts\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h"
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.305508 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-config-data\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h"
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.305698 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9755g\" (UniqueName: \"kubernetes.io/projected/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-kube-api-access-9755g\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h"
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.305738 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-horizon-secret-key\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h"
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.345251 5028 generic.go:334] "Generic (PLEG): container finished" podID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerID="b2f944fbeea1c52002d31c10e03f66511e343ca680368a88e0dc2c59ffe196e8" exitCode=143
Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.345304 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfb848f2-4688-4604-a536-8f8dbacd90b2","Type":"ContainerDied","Data":"b2f944fbeea1c52002d31c10e03f66511e343ca680368a88e0dc2c59ffe196e8"}
event={"ID":"dfb848f2-4688-4604-a536-8f8dbacd90b2","Type":"ContainerDied","Data":"b2f944fbeea1c52002d31c10e03f66511e343ca680368a88e0dc2c59ffe196e8"} Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.357420 5028 generic.go:334] "Generic (PLEG): container finished" podID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerID="3b6d89f1fd322d06c31b2709dce2b71789b730e14c896c0efa9f5631fbe5f85b" exitCode=143 Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.357472 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01","Type":"ContainerDied","Data":"3b6d89f1fd322d06c31b2709dce2b71789b730e14c896c0efa9f5631fbe5f85b"} Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.386987 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7455989c7c-5lq82"] Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.394523 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.408752 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-logs\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.408820 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-scripts\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.408862 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-config-data\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.409096 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9755g\" (UniqueName: \"kubernetes.io/projected/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-kube-api-access-9755g\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.409153 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-horizon-secret-key\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.409311 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-logs\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.409904 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-scripts\") pod \"horizon-85dc44d577-v4j2h\" (UID: 
\"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.410550 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-config-data\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.425069 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-horizon-secret-key\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.429362 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9755g\" (UniqueName: \"kubernetes.io/projected/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-kube-api-access-9755g\") pod \"horizon-85dc44d577-v4j2h\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.501652 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78d9fdd78f-r8mq4"] Nov 23 08:51:31 crc kubenswrapper[5028]: W1123 08:51:31.503656 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97c0d8ea_6ffb_4d82_bb2b_6f7accbdfad4.slice/crio-382ff0be133d7449086020abf52384325e3a1f56c8065b07c42410f73d7c7a17 WatchSource:0}: Error finding container 382ff0be133d7449086020abf52384325e3a1f56c8065b07c42410f73d7c7a17: Status 404 returned error can't find the container with id 382ff0be133d7449086020abf52384325e3a1f56c8065b07c42410f73d7c7a17 Nov 23 08:51:31 crc kubenswrapper[5028]: I1123 08:51:31.572205 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:32 crc kubenswrapper[5028]: I1123 08:51:32.174595 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85dc44d577-v4j2h"] Nov 23 08:51:32 crc kubenswrapper[5028]: I1123 08:51:32.371918 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85dc44d577-v4j2h" event={"ID":"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597","Type":"ContainerStarted","Data":"1d864055a5616b9c466921d7c8baf420cd79254e94add29b052a6804e148b5a2"} Nov 23 08:51:32 crc kubenswrapper[5028]: I1123 08:51:32.373545 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9fdd78f-r8mq4" event={"ID":"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4","Type":"ContainerStarted","Data":"382ff0be133d7449086020abf52384325e3a1f56c8065b07c42410f73d7c7a17"} Nov 23 08:51:32 crc kubenswrapper[5028]: I1123 08:51:32.376551 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7455989c7c-5lq82" event={"ID":"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53","Type":"ContainerStarted","Data":"dafb4f535e053c639d3d38285a60ef425cfe9086e57032548c2af73b7ee86917"} Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.798795 5028 generic.go:334] "Generic (PLEG): container finished" podID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerID="1ec8ecd06481fb58c076cba73ae5d15e7ecec9e86447d4e45f6931f14eae8da9" exitCode=0 Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.799083 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfb848f2-4688-4604-a536-8f8dbacd90b2","Type":"ContainerDied","Data":"1ec8ecd06481fb58c076cba73ae5d15e7ecec9e86447d4e45f6931f14eae8da9"} Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.802650 5028 generic.go:334] "Generic (PLEG): container finished" podID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerID="958068f5100bc7cc0870d9bae5de4e4b428025dcecfad5e4d0c9681ea45d3fab" exitCode=0 Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.802701 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01","Type":"ContainerDied","Data":"958068f5100bc7cc0870d9bae5de4e4b428025dcecfad5e4d0c9681ea45d3fab"} Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.905445 5028 util.go:48] "No ready sandbox for pod can be found. 
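Note how the two glance containers in each pod exit differently under the same DELETE: the glance-log sidecars die immediately with exitCode=143 (128 + 15, i.e. killed by SIGTERM), while the glance-httpd containers drain and exit 0 about four seconds later, well inside their gracePeriod=30. A small decoder for that shell-style exit-code convention:

import signal

def describe_exit(code: int) -> str:
    # Codes above 128 follow the convention 128 + signal number.
    if code == 0:
        return "clean exit"
    if code > 128:
        return f"terminated by signal {signal.Signals(code - 128).name}"
    return f"exited with status {code}"

for code in (143, 0):
    print(code, "->", describe_exit(code))  # 143 -> SIGTERM, 0 -> clean exit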
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.992185 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn28m\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-kube-api-access-mn28m\") pod \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") "
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.992293 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-logs\") pod \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") "
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.992323 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-ceph\") pod \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") "
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.992353 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-scripts\") pod \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") "
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.992411 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-httpd-run\") pod \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") "
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.992445 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-combined-ca-bundle\") pod \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") "
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.992522 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-config-data\") pod \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\" (UID: \"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01\") "
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.994301 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-logs" (OuterVolumeSpecName: "logs") pod "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" (UID: "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:51:34 crc kubenswrapper[5028]: I1123 08:51:34.996385 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" (UID: "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.005339 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-ceph" (OuterVolumeSpecName: "ceph") pod "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" (UID: "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.008274 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-kube-api-access-mn28m" (OuterVolumeSpecName: "kube-api-access-mn28m") pod "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" (UID: "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01"). InnerVolumeSpecName "kube-api-access-mn28m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.025135 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-scripts" (OuterVolumeSpecName: "scripts") pod "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" (UID: "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.096050 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn28m\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-kube-api-access-mn28m\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.096083 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-logs\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.096094 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-ceph\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.096104 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.096113 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.116167 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" (UID: "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.138526 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-config-data" (OuterVolumeSpecName: "config-data") pod "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" (UID: "bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.199739 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.200023 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.818655 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01","Type":"ContainerDied","Data":"5de6f13d429db5d9eaa9e1386bf16ba93397cb7cb75a26cb8ad84090ba3cb701"}
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.820702 5028 scope.go:117] "RemoveContainer" containerID="958068f5100bc7cc0870d9bae5de4e4b428025dcecfad5e4d0c9681ea45d3fab"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.821030 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.871661 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.901297 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.924852 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:51:35 crc kubenswrapper[5028]: E1123 08:51:35.925530 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-log"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.925552 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-log"
Nov 23 08:51:35 crc kubenswrapper[5028]: E1123 08:51:35.925639 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-httpd"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.925649 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-httpd"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.926196 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-httpd"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.926227 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" containerName="glance-log"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.928056 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.931419 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 23 08:51:35 crc kubenswrapper[5028]: I1123 08:51:35.935254 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.122418 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72a6f394-ec46-463f-b427-90b451766614-ceph\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.122482 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-scripts\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.122512 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.122547 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-config-data\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.122577 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64h6h\" (UniqueName: \"kubernetes.io/projected/72a6f394-ec46-463f-b427-90b451766614-kube-api-access-64h6h\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.122611 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72a6f394-ec46-463f-b427-90b451766614-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.122702 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72a6f394-ec46-463f-b427-90b451766614-logs\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.224561 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72a6f394-ec46-463f-b427-90b451766614-logs\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.224750 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72a6f394-ec46-463f-b427-90b451766614-ceph\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.224792 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-scripts\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.224819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.224850 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-config-data\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.224877 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64h6h\" (UniqueName: \"kubernetes.io/projected/72a6f394-ec46-463f-b427-90b451766614-kube-api-access-64h6h\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.224911 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72a6f394-ec46-463f-b427-90b451766614-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.226070 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72a6f394-ec46-463f-b427-90b451766614-logs\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.227841 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72a6f394-ec46-463f-b427-90b451766614-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.232859 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-scripts\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.233435 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72a6f394-ec46-463f-b427-90b451766614-ceph\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.233486 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-config-data\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.250066 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a6f394-ec46-463f-b427-90b451766614-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.255534 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64h6h\" (UniqueName: \"kubernetes.io/projected/72a6f394-ec46-463f-b427-90b451766614-kube-api-access-64h6h\") pod \"glance-default-internal-api-0\" (UID: \"72a6f394-ec46-463f-b427-90b451766614\") " pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:36 crc kubenswrapper[5028]: I1123 08:51:36.258554 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 23 08:51:37 crc kubenswrapper[5028]: I1123 08:51:37.065047 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01" path="/var/lib/kubelet/pods/bf4c5d6d-b9e6-4119-8d6e-8e60cc710c01/volumes"
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.676545 5028 scope.go:117] "RemoveContainer" containerID="3b6d89f1fd322d06c31b2709dce2b71789b730e14c896c0efa9f5631fbe5f85b"
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.768144 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.891035 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dfb848f2-4688-4604-a536-8f8dbacd90b2","Type":"ContainerDied","Data":"b6a100f5229b003d32229085fe35817e4434f27c50456f16426316d8274f4cfa"}
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.891122 5028 scope.go:117] "RemoveContainer" containerID="1ec8ecd06481fb58c076cba73ae5d15e7ecec9e86447d4e45f6931f14eae8da9"
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.891189 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.929882 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-logs\") pod \"dfb848f2-4688-4604-a536-8f8dbacd90b2\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") "
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.929930 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k5zw\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-kube-api-access-6k5zw\") pod \"dfb848f2-4688-4604-a536-8f8dbacd90b2\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") "
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.930062 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-httpd-run\") pod \"dfb848f2-4688-4604-a536-8f8dbacd90b2\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") "
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.930164 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-config-data\") pod \"dfb848f2-4688-4604-a536-8f8dbacd90b2\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") "
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.930217 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-combined-ca-bundle\") pod \"dfb848f2-4688-4604-a536-8f8dbacd90b2\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") "
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.930333 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-scripts\") pod \"dfb848f2-4688-4604-a536-8f8dbacd90b2\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") "
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.930403 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-ceph\") pod \"dfb848f2-4688-4604-a536-8f8dbacd90b2\" (UID: \"dfb848f2-4688-4604-a536-8f8dbacd90b2\") "
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.930501 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dfb848f2-4688-4604-a536-8f8dbacd90b2" (UID: "dfb848f2-4688-4604-a536-8f8dbacd90b2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.930521 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-logs" (OuterVolumeSpecName: "logs") pod "dfb848f2-4688-4604-a536-8f8dbacd90b2" (UID: "dfb848f2-4688-4604-a536-8f8dbacd90b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.931128 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-logs\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.931157 5028 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dfb848f2-4688-4604-a536-8f8dbacd90b2-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.940225 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-scripts" (OuterVolumeSpecName: "scripts") pod "dfb848f2-4688-4604-a536-8f8dbacd90b2" (UID: "dfb848f2-4688-4604-a536-8f8dbacd90b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.940284 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-ceph" (OuterVolumeSpecName: "ceph") pod "dfb848f2-4688-4604-a536-8f8dbacd90b2" (UID: "dfb848f2-4688-4604-a536-8f8dbacd90b2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.942636 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-kube-api-access-6k5zw" (OuterVolumeSpecName: "kube-api-access-6k5zw") pod "dfb848f2-4688-4604-a536-8f8dbacd90b2" (UID: "dfb848f2-4688-4604-a536-8f8dbacd90b2"). InnerVolumeSpecName "kube-api-access-6k5zw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.967216 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfb848f2-4688-4604-a536-8f8dbacd90b2" (UID: "dfb848f2-4688-4604-a536-8f8dbacd90b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:51:40 crc kubenswrapper[5028]: I1123 08:51:40.994133 5028 scope.go:117] "RemoveContainer" containerID="b2f944fbeea1c52002d31c10e03f66511e343ca680368a88e0dc2c59ffe196e8"
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.032831 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.032868 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-scripts\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.032882 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-ceph\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.032894 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k5zw\" (UniqueName: \"kubernetes.io/projected/dfb848f2-4688-4604-a536-8f8dbacd90b2-kube-api-access-6k5zw\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.042456 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-config-data" (OuterVolumeSpecName: "config-data") pod "dfb848f2-4688-4604-a536-8f8dbacd90b2" (UID: "dfb848f2-4688-4604-a536-8f8dbacd90b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.135176 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfb848f2-4688-4604-a536-8f8dbacd90b2-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.219780 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.241869 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.253151 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 23 08:51:41 crc kubenswrapper[5028]: E1123 08:51:41.253767 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-log"
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.253797 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-log"
Nov 23 08:51:41 crc kubenswrapper[5028]: E1123 08:51:41.253830 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-httpd"
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.253850 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-httpd"
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.254158 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-httpd"
Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.254196 5028 memory_manager.go:354] "RemoveStaleState
removing state" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" containerName="glance-log" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.255846 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.261025 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.264926 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.390804 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 23 08:51:41 crc kubenswrapper[5028]: W1123 08:51:41.402168 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a6f394_ec46_463f_b427_90b451766614.slice/crio-d8875d78d14a891823fe30a3328d19b3c52ca7585f989fb460ca656b9f04ea2e WatchSource:0}: Error finding container d8875d78d14a891823fe30a3328d19b3c52ca7585f989fb460ca656b9f04ea2e: Status 404 returned error can't find the container with id d8875d78d14a891823fe30a3328d19b3c52ca7585f989fb460ca656b9f04ea2e Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.442480 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6x7c\" (UniqueName: \"kubernetes.io/projected/c1137a64-baac-4b8d-a196-2980fa226fc6-kube-api-access-b6x7c\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.442555 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.442625 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c1137a64-baac-4b8d-a196-2980fa226fc6-ceph\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.442704 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.442757 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1137a64-baac-4b8d-a196-2980fa226fc6-logs\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.442792 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c1137a64-baac-4b8d-a196-2980fa226fc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.442818 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.544653 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.545303 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6x7c\" (UniqueName: \"kubernetes.io/projected/c1137a64-baac-4b8d-a196-2980fa226fc6-kube-api-access-b6x7c\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.545342 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.545385 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c1137a64-baac-4b8d-a196-2980fa226fc6-ceph\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.545440 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.545496 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1137a64-baac-4b8d-a196-2980fa226fc6-logs\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.545526 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1137a64-baac-4b8d-a196-2980fa226fc6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.546161 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c1137a64-baac-4b8d-a196-2980fa226fc6-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.546499 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1137a64-baac-4b8d-a196-2980fa226fc6-logs\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.552465 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c1137a64-baac-4b8d-a196-2980fa226fc6-ceph\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.552542 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.553415 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-scripts\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.556862 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1137a64-baac-4b8d-a196-2980fa226fc6-config-data\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.564584 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6x7c\" (UniqueName: \"kubernetes.io/projected/c1137a64-baac-4b8d-a196-2980fa226fc6-kube-api-access-b6x7c\") pod \"glance-default-external-api-0\" (UID: \"c1137a64-baac-4b8d-a196-2980fa226fc6\") " pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.624092 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.908553 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85dc44d577-v4j2h" event={"ID":"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597","Type":"ContainerStarted","Data":"f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500"} Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.908598 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85dc44d577-v4j2h" event={"ID":"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597","Type":"ContainerStarted","Data":"bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c"} Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.917626 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9fdd78f-r8mq4" event={"ID":"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4","Type":"ContainerStarted","Data":"d26efe73eda4ccbf009bd0fb97411db93e282632bdc8bab5968d4fe98ef05054"} Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.917703 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9fdd78f-r8mq4" event={"ID":"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4","Type":"ContainerStarted","Data":"57f7cb0aeb64df970a3b108f8650db2756bf2ef7dbf7bb8311726ec83067690f"} Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.921595 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"72a6f394-ec46-463f-b427-90b451766614","Type":"ContainerStarted","Data":"d8875d78d14a891823fe30a3328d19b3c52ca7585f989fb460ca656b9f04ea2e"} Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.934116 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7455989c7c-5lq82" event={"ID":"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53","Type":"ContainerStarted","Data":"29d996c0782cdbd9f4641460d899cdac818e741d62fbbb2cfa597adfd8b090c1"} Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.934496 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7455989c7c-5lq82" event={"ID":"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53","Type":"ContainerStarted","Data":"4005c1c24ae8ab2407603bc1a423c2c78ff666201a50d947ce6249698638f257"} Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.934656 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7455989c7c-5lq82" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon-log" containerID="cri-o://4005c1c24ae8ab2407603bc1a423c2c78ff666201a50d947ce6249698638f257" gracePeriod=30 Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.934781 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7455989c7c-5lq82" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon" containerID="cri-o://29d996c0782cdbd9f4641460d899cdac818e741d62fbbb2cfa597adfd8b090c1" gracePeriod=30 Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.943029 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-85dc44d577-v4j2h" podStartSLOduration=2.361579239 podStartE2EDuration="10.943004446s" podCreationTimestamp="2025-11-23 08:51:31 +0000 UTC" firstStartedPulling="2025-11-23 08:51:32.17532905 +0000 UTC m=+7275.872733829" lastFinishedPulling="2025-11-23 08:51:40.756754257 +0000 UTC m=+7284.454159036" observedRunningTime="2025-11-23 08:51:41.930680962 +0000 UTC m=+7285.628085741" watchObservedRunningTime="2025-11-23 08:51:41.943004446 
+0000 UTC m=+7285.640409225" Nov 23 08:51:41 crc kubenswrapper[5028]: I1123 08:51:41.979172 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78d9fdd78f-r8mq4" podStartSLOduration=2.721391949 podStartE2EDuration="11.979140916s" podCreationTimestamp="2025-11-23 08:51:30 +0000 UTC" firstStartedPulling="2025-11-23 08:51:31.506360071 +0000 UTC m=+7275.203764850" lastFinishedPulling="2025-11-23 08:51:40.764109038 +0000 UTC m=+7284.461513817" observedRunningTime="2025-11-23 08:51:41.965832588 +0000 UTC m=+7285.663237367" watchObservedRunningTime="2025-11-23 08:51:41.979140916 +0000 UTC m=+7285.676545695" Nov 23 08:51:42 crc kubenswrapper[5028]: I1123 08:51:42.008150 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7455989c7c-5lq82" podStartSLOduration=2.646055054 podStartE2EDuration="12.00812439s" podCreationTimestamp="2025-11-23 08:51:30 +0000 UTC" firstStartedPulling="2025-11-23 08:51:31.394268861 +0000 UTC m=+7275.091673640" lastFinishedPulling="2025-11-23 08:51:40.756338197 +0000 UTC m=+7284.453742976" observedRunningTime="2025-11-23 08:51:41.995880748 +0000 UTC m=+7285.693285527" watchObservedRunningTime="2025-11-23 08:51:42.00812439 +0000 UTC m=+7285.705529159" Nov 23 08:51:42 crc kubenswrapper[5028]: I1123 08:51:42.440845 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 23 08:51:42 crc kubenswrapper[5028]: I1123 08:51:42.950369 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"72a6f394-ec46-463f-b427-90b451766614","Type":"ContainerStarted","Data":"f2fa386a5cfaadc784dad95d86e4196c609992d2defe1ae2822089bf30117965"} Nov 23 08:51:42 crc kubenswrapper[5028]: I1123 08:51:42.954155 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1137a64-baac-4b8d-a196-2980fa226fc6","Type":"ContainerStarted","Data":"84645e98ecb9f0748d40be0960c9afa4ddbc766cd5d8598dd5048a26218765f6"} Nov 23 08:51:43 crc kubenswrapper[5028]: I1123 08:51:43.071190 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfb848f2-4688-4604-a536-8f8dbacd90b2" path="/var/lib/kubelet/pods/dfb848f2-4688-4604-a536-8f8dbacd90b2/volumes" Nov 23 08:51:43 crc kubenswrapper[5028]: I1123 08:51:43.969287 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1137a64-baac-4b8d-a196-2980fa226fc6","Type":"ContainerStarted","Data":"7d957d40a5d3ce6b2593b7675782e26cd30437618d20ce0f5c0b8491cbfefa2a"} Nov 23 08:51:43 crc kubenswrapper[5028]: I1123 08:51:43.970218 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c1137a64-baac-4b8d-a196-2980fa226fc6","Type":"ContainerStarted","Data":"a776a3d7c25f13c09e887f1514c819a1aa12b9882e190861ab8b95ac85b174f5"} Nov 23 08:51:43 crc kubenswrapper[5028]: I1123 08:51:43.972789 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"72a6f394-ec46-463f-b427-90b451766614","Type":"ContainerStarted","Data":"2aa8c169cf4d2b14bffd7edbe5fbd25228e35497582adc0cbdc75e86c9bd1c6c"} Nov 23 08:51:44 crc kubenswrapper[5028]: I1123 08:51:44.002178 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.002141347 podStartE2EDuration="3.002141347s" podCreationTimestamp="2025-11-23 
08:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:51:43.993164026 +0000 UTC m=+7287.690568805" watchObservedRunningTime="2025-11-23 08:51:44.002141347 +0000 UTC m=+7287.699546166" Nov 23 08:51:44 crc kubenswrapper[5028]: I1123 08:51:44.026304 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.026274971 podStartE2EDuration="9.026274971s" podCreationTimestamp="2025-11-23 08:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:51:44.022726774 +0000 UTC m=+7287.720131553" watchObservedRunningTime="2025-11-23 08:51:44.026274971 +0000 UTC m=+7287.723679750" Nov 23 08:51:46 crc kubenswrapper[5028]: I1123 08:51:46.258685 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:46 crc kubenswrapper[5028]: I1123 08:51:46.259059 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:46 crc kubenswrapper[5028]: I1123 08:51:46.294896 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:46 crc kubenswrapper[5028]: I1123 08:51:46.311059 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:47 crc kubenswrapper[5028]: I1123 08:51:47.016915 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:47 crc kubenswrapper[5028]: I1123 08:51:47.017004 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:49 crc kubenswrapper[5028]: I1123 08:51:49.253229 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:49 crc kubenswrapper[5028]: I1123 08:51:49.253562 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 08:51:49 crc kubenswrapper[5028]: I1123 08:51:49.259570 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 23 08:51:50 crc kubenswrapper[5028]: I1123 08:51:50.654835 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7455989c7c-5lq82" Nov 23 08:51:50 crc kubenswrapper[5028]: I1123 08:51:50.784636 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78d9fdd78f-r8mq4" Nov 23 08:51:50 crc kubenswrapper[5028]: I1123 08:51:50.786242 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d9fdd78f-r8mq4" Nov 23 08:51:51 crc kubenswrapper[5028]: I1123 08:51:51.572760 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:51 crc kubenswrapper[5028]: I1123 08:51:51.573278 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:51:51 crc kubenswrapper[5028]: I1123 08:51:51.574215 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-85dc44d577-v4j2h" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" 
containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused" Nov 23 08:51:51 crc kubenswrapper[5028]: I1123 08:51:51.625448 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 08:51:51 crc kubenswrapper[5028]: I1123 08:51:51.625516 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 23 08:51:51 crc kubenswrapper[5028]: I1123 08:51:51.659125 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 08:51:51 crc kubenswrapper[5028]: I1123 08:51:51.693175 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 23 08:51:52 crc kubenswrapper[5028]: I1123 08:51:52.070616 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 08:51:52 crc kubenswrapper[5028]: I1123 08:51:52.070678 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 23 08:51:54 crc kubenswrapper[5028]: I1123 08:51:54.346675 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 08:51:54 crc kubenswrapper[5028]: I1123 08:51:54.348055 5028 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 08:51:54 crc kubenswrapper[5028]: I1123 08:51:54.387667 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 23 08:52:00 crc kubenswrapper[5028]: I1123 08:52:00.785514 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78d9fdd78f-r8mq4" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Nov 23 08:52:01 crc kubenswrapper[5028]: I1123 08:52:01.573619 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-85dc44d577-v4j2h" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.405456 5028 generic.go:334] "Generic (PLEG): container finished" podID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerID="29d996c0782cdbd9f4641460d899cdac818e741d62fbbb2cfa597adfd8b090c1" exitCode=137 Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.406511 5028 generic.go:334] "Generic (PLEG): container finished" podID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerID="4005c1c24ae8ab2407603bc1a423c2c78ff666201a50d947ce6249698638f257" exitCode=137 Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.405665 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7455989c7c-5lq82" event={"ID":"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53","Type":"ContainerDied","Data":"29d996c0782cdbd9f4641460d899cdac818e741d62fbbb2cfa597adfd8b090c1"} Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.406585 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7455989c7c-5lq82" 
event={"ID":"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53","Type":"ContainerDied","Data":"4005c1c24ae8ab2407603bc1a423c2c78ff666201a50d947ce6249698638f257"} Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.406606 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7455989c7c-5lq82" event={"ID":"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53","Type":"ContainerDied","Data":"dafb4f535e053c639d3d38285a60ef425cfe9086e57032548c2af73b7ee86917"} Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.406622 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafb4f535e053c639d3d38285a60ef425cfe9086e57032548c2af73b7ee86917" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.478801 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7455989c7c-5lq82" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.592049 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-scripts\") pod \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.592183 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmw7\" (UniqueName: \"kubernetes.io/projected/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-kube-api-access-nvmw7\") pod \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.592237 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-horizon-secret-key\") pod \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.592337 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-logs\") pod \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.592474 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-config-data\") pod \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\" (UID: \"f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53\") " Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.593047 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-logs" (OuterVolumeSpecName: "logs") pod "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" (UID: "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.601129 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" (UID: "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.610542 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-kube-api-access-nvmw7" (OuterVolumeSpecName: "kube-api-access-nvmw7") pod "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" (UID: "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53"). InnerVolumeSpecName "kube-api-access-nvmw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.631000 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-config-data" (OuterVolumeSpecName: "config-data") pod "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" (UID: "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.649625 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ts4xl"] Nov 23 08:52:12 crc kubenswrapper[5028]: E1123 08:52:12.650354 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.650378 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon" Nov 23 08:52:12 crc kubenswrapper[5028]: E1123 08:52:12.650425 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon-log" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.650434 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon-log" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.650704 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.650741 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" containerName="horizon-log" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.652874 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.660861 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts4xl"] Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.688286 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-scripts" (OuterVolumeSpecName: "scripts") pod "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" (UID: "f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.694819 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.695768 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.695797 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmw7\" (UniqueName: \"kubernetes.io/projected/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-kube-api-access-nvmw7\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.695815 5028 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.695827 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.771815 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78d9fdd78f-r8mq4" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.798306 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-utilities\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.798372 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvn68\" (UniqueName: \"kubernetes.io/projected/ae9e5235-4bf4-4fd1-9404-c23897b0be06-kube-api-access-mvn68\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.798917 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-catalog-content\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.901237 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-utilities\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.901293 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvn68\" (UniqueName: \"kubernetes.io/projected/ae9e5235-4bf4-4fd1-9404-c23897b0be06-kube-api-access-mvn68\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " 
pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.901352 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-catalog-content\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.902433 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-utilities\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.902488 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-catalog-content\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:12 crc kubenswrapper[5028]: I1123 08:52:12.926113 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvn68\" (UniqueName: \"kubernetes.io/projected/ae9e5235-4bf4-4fd1-9404-c23897b0be06-kube-api-access-mvn68\") pod \"redhat-marketplace-ts4xl\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:13 crc kubenswrapper[5028]: I1123 08:52:13.005645 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:13 crc kubenswrapper[5028]: I1123 08:52:13.420076 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7455989c7c-5lq82" Nov 23 08:52:13 crc kubenswrapper[5028]: I1123 08:52:13.442507 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:52:13 crc kubenswrapper[5028]: I1123 08:52:13.452035 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7455989c7c-5lq82"] Nov 23 08:52:13 crc kubenswrapper[5028]: I1123 08:52:13.466874 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7455989c7c-5lq82"] Nov 23 08:52:13 crc kubenswrapper[5028]: I1123 08:52:13.509139 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts4xl"] Nov 23 08:52:13 crc kubenswrapper[5028]: W1123 08:52:13.523526 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae9e5235_4bf4_4fd1_9404_c23897b0be06.slice/crio-81950396d17c112b54cf60e111b131b2e87727a79d73ba17daf87bbb95ee2dac WatchSource:0}: Error finding container 81950396d17c112b54cf60e111b131b2e87727a79d73ba17daf87bbb95ee2dac: Status 404 returned error can't find the container with id 81950396d17c112b54cf60e111b131b2e87727a79d73ba17daf87bbb95ee2dac Nov 23 08:52:14 crc kubenswrapper[5028]: I1123 08:52:14.451754 5028 generic.go:334] "Generic (PLEG): container finished" podID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerID="8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8" exitCode=0 Nov 23 08:52:14 crc kubenswrapper[5028]: I1123 08:52:14.453969 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts4xl" event={"ID":"ae9e5235-4bf4-4fd1-9404-c23897b0be06","Type":"ContainerDied","Data":"8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8"} Nov 23 08:52:14 crc kubenswrapper[5028]: I1123 08:52:14.454028 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts4xl" event={"ID":"ae9e5235-4bf4-4fd1-9404-c23897b0be06","Type":"ContainerStarted","Data":"81950396d17c112b54cf60e111b131b2e87727a79d73ba17daf87bbb95ee2dac"} Nov 23 08:52:14 crc kubenswrapper[5028]: I1123 08:52:14.868583 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78d9fdd78f-r8mq4" Nov 23 08:52:15 crc kubenswrapper[5028]: I1123 08:52:15.068358 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53" path="/var/lib/kubelet/pods/f5c4fd7c-5b21-4557-a1db-1ebc0ca2ee53/volumes" Nov 23 08:52:15 crc kubenswrapper[5028]: I1123 08:52:15.340850 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:52:15 crc kubenswrapper[5028]: I1123 08:52:15.408162 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d9fdd78f-r8mq4"] Nov 23 08:52:15 crc kubenswrapper[5028]: I1123 08:52:15.473582 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d9fdd78f-r8mq4" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon-log" containerID="cri-o://57f7cb0aeb64df970a3b108f8650db2756bf2ef7dbf7bb8311726ec83067690f" gracePeriod=30 Nov 23 08:52:15 crc kubenswrapper[5028]: I1123 08:52:15.474645 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78d9fdd78f-r8mq4" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" 
containerID="cri-o://d26efe73eda4ccbf009bd0fb97411db93e282632bdc8bab5968d4fe98ef05054" gracePeriod=30 Nov 23 08:52:16 crc kubenswrapper[5028]: I1123 08:52:16.492157 5028 generic.go:334] "Generic (PLEG): container finished" podID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerID="231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329" exitCode=0 Nov 23 08:52:16 crc kubenswrapper[5028]: I1123 08:52:16.492231 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts4xl" event={"ID":"ae9e5235-4bf4-4fd1-9404-c23897b0be06","Type":"ContainerDied","Data":"231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329"} Nov 23 08:52:17 crc kubenswrapper[5028]: I1123 08:52:17.504577 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts4xl" event={"ID":"ae9e5235-4bf4-4fd1-9404-c23897b0be06","Type":"ContainerStarted","Data":"23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3"} Nov 23 08:52:17 crc kubenswrapper[5028]: I1123 08:52:17.532290 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ts4xl" podStartSLOduration=3.095717276 podStartE2EDuration="5.532268374s" podCreationTimestamp="2025-11-23 08:52:12 +0000 UTC" firstStartedPulling="2025-11-23 08:52:14.461258509 +0000 UTC m=+7318.158663288" lastFinishedPulling="2025-11-23 08:52:16.897809617 +0000 UTC m=+7320.595214386" observedRunningTime="2025-11-23 08:52:17.523392105 +0000 UTC m=+7321.220796884" watchObservedRunningTime="2025-11-23 08:52:17.532268374 +0000 UTC m=+7321.229673143" Nov 23 08:52:19 crc kubenswrapper[5028]: I1123 08:52:19.526500 5028 generic.go:334] "Generic (PLEG): container finished" podID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerID="d26efe73eda4ccbf009bd0fb97411db93e282632bdc8bab5968d4fe98ef05054" exitCode=0 Nov 23 08:52:19 crc kubenswrapper[5028]: I1123 08:52:19.526575 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9fdd78f-r8mq4" event={"ID":"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4","Type":"ContainerDied","Data":"d26efe73eda4ccbf009bd0fb97411db93e282632bdc8bab5968d4fe98ef05054"} Nov 23 08:52:20 crc kubenswrapper[5028]: I1123 08:52:20.785780 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d9fdd78f-r8mq4" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Nov 23 08:52:23 crc kubenswrapper[5028]: I1123 08:52:23.006161 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:23 crc kubenswrapper[5028]: I1123 08:52:23.006650 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:23 crc kubenswrapper[5028]: I1123 08:52:23.082880 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:23 crc kubenswrapper[5028]: I1123 08:52:23.620614 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:23 crc kubenswrapper[5028]: I1123 08:52:23.684765 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts4xl"] Nov 23 08:52:25 crc 
kubenswrapper[5028]: I1123 08:52:25.590895 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ts4xl" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="registry-server" containerID="cri-o://23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3" gracePeriod=2 Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.113456 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.231933 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-utilities\") pod \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.232106 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvn68\" (UniqueName: \"kubernetes.io/projected/ae9e5235-4bf4-4fd1-9404-c23897b0be06-kube-api-access-mvn68\") pod \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.232236 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-catalog-content\") pod \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\" (UID: \"ae9e5235-4bf4-4fd1-9404-c23897b0be06\") " Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.233208 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-utilities" (OuterVolumeSpecName: "utilities") pod "ae9e5235-4bf4-4fd1-9404-c23897b0be06" (UID: "ae9e5235-4bf4-4fd1-9404-c23897b0be06"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.247297 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae9e5235-4bf4-4fd1-9404-c23897b0be06-kube-api-access-mvn68" (OuterVolumeSpecName: "kube-api-access-mvn68") pod "ae9e5235-4bf4-4fd1-9404-c23897b0be06" (UID: "ae9e5235-4bf4-4fd1-9404-c23897b0be06"). InnerVolumeSpecName "kube-api-access-mvn68". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.276258 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae9e5235-4bf4-4fd1-9404-c23897b0be06" (UID: "ae9e5235-4bf4-4fd1-9404-c23897b0be06"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.336556 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.336630 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvn68\" (UniqueName: \"kubernetes.io/projected/ae9e5235-4bf4-4fd1-9404-c23897b0be06-kube-api-access-mvn68\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.336652 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae9e5235-4bf4-4fd1-9404-c23897b0be06-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.603281 5028 generic.go:334] "Generic (PLEG): container finished" podID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerID="23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3" exitCode=0 Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.603334 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts4xl" event={"ID":"ae9e5235-4bf4-4fd1-9404-c23897b0be06","Type":"ContainerDied","Data":"23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3"} Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.603368 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts4xl" event={"ID":"ae9e5235-4bf4-4fd1-9404-c23897b0be06","Type":"ContainerDied","Data":"81950396d17c112b54cf60e111b131b2e87727a79d73ba17daf87bbb95ee2dac"} Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.603388 5028 scope.go:117] "RemoveContainer" containerID="23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.603494 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts4xl" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.628569 5028 scope.go:117] "RemoveContainer" containerID="231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.666696 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts4xl"] Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.675715 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts4xl"] Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.690408 5028 scope.go:117] "RemoveContainer" containerID="8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.725753 5028 scope.go:117] "RemoveContainer" containerID="23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3" Nov 23 08:52:26 crc kubenswrapper[5028]: E1123 08:52:26.726601 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3\": container with ID starting with 23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3 not found: ID does not exist" containerID="23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.726683 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3"} err="failed to get container status \"23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3\": rpc error: code = NotFound desc = could not find container \"23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3\": container with ID starting with 23c9032dba284fe6096ee8ad4ae73b145e29264b9da738e2c4af8554202cb4a3 not found: ID does not exist" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.726740 5028 scope.go:117] "RemoveContainer" containerID="231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329" Nov 23 08:52:26 crc kubenswrapper[5028]: E1123 08:52:26.727444 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329\": container with ID starting with 231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329 not found: ID does not exist" containerID="231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.727489 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329"} err="failed to get container status \"231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329\": rpc error: code = NotFound desc = could not find container \"231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329\": container with ID starting with 231c5d8023c690122bda48ed50249498bcc97b238b7ebb8f346927084ef05329 not found: ID does not exist" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.727517 5028 scope.go:117] "RemoveContainer" containerID="8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8" Nov 23 08:52:26 crc kubenswrapper[5028]: E1123 08:52:26.727969 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8\": container with ID starting with 8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8 not found: ID does not exist" containerID="8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8" Nov 23 08:52:26 crc kubenswrapper[5028]: I1123 08:52:26.727999 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8"} err="failed to get container status \"8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8\": rpc error: code = NotFound desc = could not find container \"8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8\": container with ID starting with 8078e250a3a6aeaf2edd232b1aece41f2eef1349038cde6b6b9e434e8317eee8 not found: ID does not exist" Nov 23 08:52:27 crc kubenswrapper[5028]: I1123 08:52:27.073380 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" path="/var/lib/kubelet/pods/ae9e5235-4bf4-4fd1-9404-c23897b0be06/volumes" Nov 23 08:52:30 crc kubenswrapper[5028]: I1123 08:52:30.785668 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d9fdd78f-r8mq4" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Nov 23 08:52:40 crc kubenswrapper[5028]: I1123 08:52:40.784664 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78d9fdd78f-r8mq4" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.104:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8080: connect: connection refused" Nov 23 08:52:40 crc kubenswrapper[5028]: I1123 08:52:40.787255 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78d9fdd78f-r8mq4" Nov 23 08:52:45 crc kubenswrapper[5028]: I1123 08:52:45.825817 5028 generic.go:334] "Generic (PLEG): container finished" podID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerID="57f7cb0aeb64df970a3b108f8650db2756bf2ef7dbf7bb8311726ec83067690f" exitCode=137 Nov 23 08:52:45 crc kubenswrapper[5028]: I1123 08:52:45.825909 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9fdd78f-r8mq4" event={"ID":"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4","Type":"ContainerDied","Data":"57f7cb0aeb64df970a3b108f8650db2756bf2ef7dbf7bb8311726ec83067690f"} Nov 23 08:52:45 crc kubenswrapper[5028]: I1123 08:52:45.946324 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d9fdd78f-r8mq4" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.063025 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-horizon-secret-key\") pod \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.063152 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-logs\") pod \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.063300 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-config-data\") pod \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.063519 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-scripts\") pod \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.063634 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfjdz\" (UniqueName: \"kubernetes.io/projected/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-kube-api-access-nfjdz\") pod \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\" (UID: \"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4\") " Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.063890 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-logs" (OuterVolumeSpecName: "logs") pod "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" (UID: "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.065028 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.070327 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-kube-api-access-nfjdz" (OuterVolumeSpecName: "kube-api-access-nfjdz") pod "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" (UID: "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4"). InnerVolumeSpecName "kube-api-access-nfjdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.070409 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" (UID: "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.090288 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-config-data" (OuterVolumeSpecName: "config-data") pod "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" (UID: "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.094821 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-scripts" (OuterVolumeSpecName: "scripts") pod "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" (UID: "97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.166838 5028 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.166890 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.166900 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.166913 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfjdz\" (UniqueName: \"kubernetes.io/projected/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4-kube-api-access-nfjdz\") on node \"crc\" DevicePath \"\"" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.845296 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78d9fdd78f-r8mq4" event={"ID":"97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4","Type":"ContainerDied","Data":"382ff0be133d7449086020abf52384325e3a1f56c8065b07c42410f73d7c7a17"} Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.845396 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78d9fdd78f-r8mq4" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.845776 5028 scope.go:117] "RemoveContainer" containerID="d26efe73eda4ccbf009bd0fb97411db93e282632bdc8bab5968d4fe98ef05054" Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.885675 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78d9fdd78f-r8mq4"] Nov 23 08:52:46 crc kubenswrapper[5028]: I1123 08:52:46.894733 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78d9fdd78f-r8mq4"] Nov 23 08:52:47 crc kubenswrapper[5028]: I1123 08:52:47.003763 5028 scope.go:117] "RemoveContainer" containerID="57f7cb0aeb64df970a3b108f8650db2756bf2ef7dbf7bb8311726ec83067690f" Nov 23 08:52:47 crc kubenswrapper[5028]: I1123 08:52:47.065790 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" path="/var/lib/kubelet/pods/97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4/volumes" Nov 23 08:52:58 crc kubenswrapper[5028]: I1123 08:52:58.053558 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-dwvr9"] Nov 23 08:52:58 crc kubenswrapper[5028]: I1123 08:52:58.070412 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-dwvr9"] Nov 23 08:52:58 crc kubenswrapper[5028]: I1123 08:52:58.085909 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6a52-account-create-ztjqr"] Nov 23 08:52:58 crc kubenswrapper[5028]: I1123 08:52:58.101089 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6a52-account-create-ztjqr"] Nov 23 08:52:59 crc kubenswrapper[5028]: I1123 08:52:59.074836 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81eb5d8f-1624-4544-96ef-f6d7f9f11bc0" path="/var/lib/kubelet/pods/81eb5d8f-1624-4544-96ef-f6d7f9f11bc0/volumes" Nov 23 08:52:59 crc kubenswrapper[5028]: I1123 08:52:59.077376 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f390829a-d8ff-46e1-b5b9-28f041a8abb6" path="/var/lib/kubelet/pods/f390829a-d8ff-46e1-b5b9-28f041a8abb6/volumes" Nov 23 08:53:10 crc kubenswrapper[5028]: I1123 08:53:10.046243 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xpnns"] Nov 23 08:53:10 crc kubenswrapper[5028]: I1123 08:53:10.060180 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xpnns"] Nov 23 08:53:11 crc kubenswrapper[5028]: I1123 08:53:11.066802 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55929aa-03b3-4a01-9931-5cd72deae4c5" path="/var/lib/kubelet/pods/e55929aa-03b3-4a01-9931-5cd72deae4c5/volumes" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.121185 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2rxh"] Nov 23 08:53:12 crc kubenswrapper[5028]: E1123 08:53:12.122309 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon-log" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122328 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon-log" Nov 23 08:53:12 crc kubenswrapper[5028]: E1123 08:53:12.122371 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122377 5028 
state_mem.go:107] "Deleted CPUSet assignment" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" Nov 23 08:53:12 crc kubenswrapper[5028]: E1123 08:53:12.122395 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="registry-server" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122401 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="registry-server" Nov 23 08:53:12 crc kubenswrapper[5028]: E1123 08:53:12.122423 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="extract-content" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122431 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="extract-content" Nov 23 08:53:12 crc kubenswrapper[5028]: E1123 08:53:12.122445 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="extract-utilities" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122452 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="extract-utilities" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122629 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon-log" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122645 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae9e5235-4bf4-4fd1-9404-c23897b0be06" containerName="registry-server" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.122653 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="97c0d8ea-6ffb-4d82-bb2b-6f7accbdfad4" containerName="horizon" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.124141 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.162118 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2rxh"] Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.231331 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nmp4\" (UniqueName: \"kubernetes.io/projected/49c0e68c-1081-4439-b45f-81c45fef1f0e-kube-api-access-9nmp4\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.231469 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-utilities\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.231517 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-catalog-content\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.334290 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-utilities\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.334377 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-catalog-content\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.334505 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nmp4\" (UniqueName: \"kubernetes.io/projected/49c0e68c-1081-4439-b45f-81c45fef1f0e-kube-api-access-9nmp4\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.335080 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-utilities\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.335159 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-catalog-content\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.373120 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9nmp4\" (UniqueName: \"kubernetes.io/projected/49c0e68c-1081-4439-b45f-81c45fef1f0e-kube-api-access-9nmp4\") pod \"community-operators-l2rxh\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:12 crc kubenswrapper[5028]: I1123 08:53:12.449530 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:13 crc kubenswrapper[5028]: I1123 08:53:13.027669 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2rxh"] Nov 23 08:53:13 crc kubenswrapper[5028]: W1123 08:53:13.030736 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49c0e68c_1081_4439_b45f_81c45fef1f0e.slice/crio-c03ad63d709c13e6eebb08a26889f7f51949621ce10b055eeb46208064d17c0d WatchSource:0}: Error finding container c03ad63d709c13e6eebb08a26889f7f51949621ce10b055eeb46208064d17c0d: Status 404 returned error can't find the container with id c03ad63d709c13e6eebb08a26889f7f51949621ce10b055eeb46208064d17c0d Nov 23 08:53:13 crc kubenswrapper[5028]: I1123 08:53:13.212354 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2rxh" event={"ID":"49c0e68c-1081-4439-b45f-81c45fef1f0e","Type":"ContainerStarted","Data":"c03ad63d709c13e6eebb08a26889f7f51949621ce10b055eeb46208064d17c0d"} Nov 23 08:53:14 crc kubenswrapper[5028]: I1123 08:53:14.225629 5028 generic.go:334] "Generic (PLEG): container finished" podID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerID="714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640" exitCode=0 Nov 23 08:53:14 crc kubenswrapper[5028]: I1123 08:53:14.225739 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2rxh" event={"ID":"49c0e68c-1081-4439-b45f-81c45fef1f0e","Type":"ContainerDied","Data":"714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640"} Nov 23 08:53:15 crc kubenswrapper[5028]: I1123 08:53:15.238005 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2rxh" event={"ID":"49c0e68c-1081-4439-b45f-81c45fef1f0e","Type":"ContainerStarted","Data":"5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f"} Nov 23 08:53:16 crc kubenswrapper[5028]: I1123 08:53:16.261690 5028 generic.go:334] "Generic (PLEG): container finished" podID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerID="5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f" exitCode=0 Nov 23 08:53:16 crc kubenswrapper[5028]: I1123 08:53:16.262278 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2rxh" event={"ID":"49c0e68c-1081-4439-b45f-81c45fef1f0e","Type":"ContainerDied","Data":"5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f"} Nov 23 08:53:17 crc kubenswrapper[5028]: I1123 08:53:17.275635 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2rxh" event={"ID":"49c0e68c-1081-4439-b45f-81c45fef1f0e","Type":"ContainerStarted","Data":"1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9"} Nov 23 08:53:17 crc kubenswrapper[5028]: I1123 08:53:17.307894 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2rxh" 
podStartSLOduration=2.870826161 podStartE2EDuration="5.30784213s" podCreationTimestamp="2025-11-23 08:53:12 +0000 UTC" firstStartedPulling="2025-11-23 08:53:14.232585589 +0000 UTC m=+7377.929990368" lastFinishedPulling="2025-11-23 08:53:16.669601558 +0000 UTC m=+7380.367006337" observedRunningTime="2025-11-23 08:53:17.293853485 +0000 UTC m=+7380.991258274" watchObservedRunningTime="2025-11-23 08:53:17.30784213 +0000 UTC m=+7381.005246909" Nov 23 08:53:22 crc kubenswrapper[5028]: I1123 08:53:22.450495 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:22 crc kubenswrapper[5028]: I1123 08:53:22.450895 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:22 crc kubenswrapper[5028]: I1123 08:53:22.516554 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:22 crc kubenswrapper[5028]: I1123 08:53:22.976681 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b8b66b649-ppjcx"] Nov 23 08:53:22 crc kubenswrapper[5028]: I1123 08:53:22.980785 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.012665 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b8b66b649-ppjcx"] Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.021662 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2457f13b-dc7a-450c-b083-00edbc261f14-horizon-secret-key\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.021790 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2457f13b-dc7a-450c-b083-00edbc261f14-config-data\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.022239 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2457f13b-dc7a-450c-b083-00edbc261f14-logs\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.022268 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2457f13b-dc7a-450c-b083-00edbc261f14-scripts\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.022626 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2df8x\" (UniqueName: \"kubernetes.io/projected/2457f13b-dc7a-450c-b083-00edbc261f14-kube-api-access-2df8x\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.124652 
5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2df8x\" (UniqueName: \"kubernetes.io/projected/2457f13b-dc7a-450c-b083-00edbc261f14-kube-api-access-2df8x\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.124727 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2457f13b-dc7a-450c-b083-00edbc261f14-horizon-secret-key\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.124765 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2457f13b-dc7a-450c-b083-00edbc261f14-config-data\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.125351 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2457f13b-dc7a-450c-b083-00edbc261f14-logs\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.125412 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2457f13b-dc7a-450c-b083-00edbc261f14-scripts\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.126175 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2457f13b-dc7a-450c-b083-00edbc261f14-scripts\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.127884 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2457f13b-dc7a-450c-b083-00edbc261f14-logs\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.128522 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2457f13b-dc7a-450c-b083-00edbc261f14-config-data\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.143702 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2457f13b-dc7a-450c-b083-00edbc261f14-horizon-secret-key\") pod \"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.153090 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2df8x\" (UniqueName: \"kubernetes.io/projected/2457f13b-dc7a-450c-b083-00edbc261f14-kube-api-access-2df8x\") pod 
\"horizon-7b8b66b649-ppjcx\" (UID: \"2457f13b-dc7a-450c-b083-00edbc261f14\") " pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.302435 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.408208 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.486300 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2rxh"] Nov 23 08:53:23 crc kubenswrapper[5028]: I1123 08:53:23.793171 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b8b66b649-ppjcx"] Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.359034 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8b66b649-ppjcx" event={"ID":"2457f13b-dc7a-450c-b083-00edbc261f14","Type":"ContainerStarted","Data":"96abd2d07ec57afd1dbfaf0a821f747a90607cef5962a662c4cfd58948d3799b"} Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.359590 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8b66b649-ppjcx" event={"ID":"2457f13b-dc7a-450c-b083-00edbc261f14","Type":"ContainerStarted","Data":"ae7fe4877dcf70e4c6625478e30a9383ed44977c0368d2348103bc56d99cc3f6"} Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.359611 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b8b66b649-ppjcx" event={"ID":"2457f13b-dc7a-450c-b083-00edbc261f14","Type":"ContainerStarted","Data":"3f6f48c20771d20e248da7b93a7d218544abaf75a625e1c72dae529cde0ac133"} Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.385850 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-14aa-account-create-v6nm6"] Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.387616 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.390820 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.398846 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b8b66b649-ppjcx" podStartSLOduration=2.398813693 podStartE2EDuration="2.398813693s" podCreationTimestamp="2025-11-23 08:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:53:24.39014025 +0000 UTC m=+7388.087545029" watchObservedRunningTime="2025-11-23 08:53:24.398813693 +0000 UTC m=+7388.096218472" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.440077 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-14aa-account-create-v6nm6"] Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.460296 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kd4j\" (UniqueName: \"kubernetes.io/projected/a0fc5999-6ccc-4626-a5a9-456473823758-kube-api-access-7kd4j\") pod \"heat-14aa-account-create-v6nm6\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.460566 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0fc5999-6ccc-4626-a5a9-456473823758-operator-scripts\") pod \"heat-14aa-account-create-v6nm6\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.461650 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-dwhsb"] Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.484394 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.498857 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-dwhsb"] Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.564180 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kd4j\" (UniqueName: \"kubernetes.io/projected/a0fc5999-6ccc-4626-a5a9-456473823758-kube-api-access-7kd4j\") pod \"heat-14aa-account-create-v6nm6\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.564266 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5f4\" (UniqueName: \"kubernetes.io/projected/1bffb19a-ff86-48f4-b489-ccd5611101db-kube-api-access-pt5f4\") pod \"heat-db-create-dwhsb\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.564326 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0fc5999-6ccc-4626-a5a9-456473823758-operator-scripts\") pod \"heat-14aa-account-create-v6nm6\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.564371 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffb19a-ff86-48f4-b489-ccd5611101db-operator-scripts\") pod \"heat-db-create-dwhsb\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.565717 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0fc5999-6ccc-4626-a5a9-456473823758-operator-scripts\") pod \"heat-14aa-account-create-v6nm6\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.615931 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kd4j\" (UniqueName: \"kubernetes.io/projected/a0fc5999-6ccc-4626-a5a9-456473823758-kube-api-access-7kd4j\") pod \"heat-14aa-account-create-v6nm6\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.668031 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5f4\" (UniqueName: \"kubernetes.io/projected/1bffb19a-ff86-48f4-b489-ccd5611101db-kube-api-access-pt5f4\") pod \"heat-db-create-dwhsb\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.670457 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffb19a-ff86-48f4-b489-ccd5611101db-operator-scripts\") pod \"heat-db-create-dwhsb\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.671210 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1bffb19a-ff86-48f4-b489-ccd5611101db-operator-scripts\") pod \"heat-db-create-dwhsb\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.686327 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5f4\" (UniqueName: \"kubernetes.io/projected/1bffb19a-ff86-48f4-b489-ccd5611101db-kube-api-access-pt5f4\") pod \"heat-db-create-dwhsb\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.722275 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:24 crc kubenswrapper[5028]: I1123 08:53:24.830048 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.202221 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-14aa-account-create-v6nm6"] Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.298705 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-dwhsb"] Nov 23 08:53:25 crc kubenswrapper[5028]: W1123 08:53:25.319649 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bffb19a_ff86_48f4_b489_ccd5611101db.slice/crio-2bc0bd8158351f8cf4344860b35323137377ad6becf853f34a9bc0a61a27bb91 WatchSource:0}: Error finding container 2bc0bd8158351f8cf4344860b35323137377ad6becf853f34a9bc0a61a27bb91: Status 404 returned error can't find the container with id 2bc0bd8158351f8cf4344860b35323137377ad6becf853f34a9bc0a61a27bb91 Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.381073 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dwhsb" event={"ID":"1bffb19a-ff86-48f4-b489-ccd5611101db","Type":"ContainerStarted","Data":"2bc0bd8158351f8cf4344860b35323137377ad6becf853f34a9bc0a61a27bb91"} Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.386148 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-14aa-account-create-v6nm6" event={"ID":"a0fc5999-6ccc-4626-a5a9-456473823758","Type":"ContainerStarted","Data":"8be11a68a1160f025a261af1e3d8aea5291acdfedb1f8bcb412bb4952ddda6c0"} Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.386234 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l2rxh" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="registry-server" containerID="cri-o://1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9" gracePeriod=2 Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.811182 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.898258 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-catalog-content\") pod \"49c0e68c-1081-4439-b45f-81c45fef1f0e\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.898319 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-utilities\") pod \"49c0e68c-1081-4439-b45f-81c45fef1f0e\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.898379 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nmp4\" (UniqueName: \"kubernetes.io/projected/49c0e68c-1081-4439-b45f-81c45fef1f0e-kube-api-access-9nmp4\") pod \"49c0e68c-1081-4439-b45f-81c45fef1f0e\" (UID: \"49c0e68c-1081-4439-b45f-81c45fef1f0e\") " Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.904982 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c0e68c-1081-4439-b45f-81c45fef1f0e-kube-api-access-9nmp4" (OuterVolumeSpecName: "kube-api-access-9nmp4") pod "49c0e68c-1081-4439-b45f-81c45fef1f0e" (UID: "49c0e68c-1081-4439-b45f-81c45fef1f0e"). InnerVolumeSpecName "kube-api-access-9nmp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.906720 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-utilities" (OuterVolumeSpecName: "utilities") pod "49c0e68c-1081-4439-b45f-81c45fef1f0e" (UID: "49c0e68c-1081-4439-b45f-81c45fef1f0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:53:25 crc kubenswrapper[5028]: I1123 08:53:25.955283 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49c0e68c-1081-4439-b45f-81c45fef1f0e" (UID: "49c0e68c-1081-4439-b45f-81c45fef1f0e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.000728 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.000770 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49c0e68c-1081-4439-b45f-81c45fef1f0e-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.000783 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nmp4\" (UniqueName: \"kubernetes.io/projected/49c0e68c-1081-4439-b45f-81c45fef1f0e-kube-api-access-9nmp4\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.404643 5028 generic.go:334] "Generic (PLEG): container finished" podID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerID="1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9" exitCode=0 Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.404789 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2rxh" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.404773 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2rxh" event={"ID":"49c0e68c-1081-4439-b45f-81c45fef1f0e","Type":"ContainerDied","Data":"1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9"} Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.405014 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2rxh" event={"ID":"49c0e68c-1081-4439-b45f-81c45fef1f0e","Type":"ContainerDied","Data":"c03ad63d709c13e6eebb08a26889f7f51949621ce10b055eeb46208064d17c0d"} Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.405101 5028 scope.go:117] "RemoveContainer" containerID="1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.408744 5028 generic.go:334] "Generic (PLEG): container finished" podID="1bffb19a-ff86-48f4-b489-ccd5611101db" containerID="049fd97de01659f99a0fd81264d960ed3123eb359727cee79b871f7e5d3dc7d7" exitCode=0 Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.408862 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dwhsb" event={"ID":"1bffb19a-ff86-48f4-b489-ccd5611101db","Type":"ContainerDied","Data":"049fd97de01659f99a0fd81264d960ed3123eb359727cee79b871f7e5d3dc7d7"} Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.411612 5028 generic.go:334] "Generic (PLEG): container finished" podID="a0fc5999-6ccc-4626-a5a9-456473823758" containerID="c76459c1cc667d8da41f31d5f89d71329c1dcf212a95aef3603f714f0c42f709" exitCode=0 Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.411673 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-14aa-account-create-v6nm6" event={"ID":"a0fc5999-6ccc-4626-a5a9-456473823758","Type":"ContainerDied","Data":"c76459c1cc667d8da41f31d5f89d71329c1dcf212a95aef3603f714f0c42f709"} Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.436867 5028 scope.go:117] "RemoveContainer" containerID="5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.486542 5028 scope.go:117] "RemoveContainer" 
containerID="714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.513391 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2rxh"] Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.519055 5028 scope.go:117] "RemoveContainer" containerID="1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9" Nov 23 08:53:26 crc kubenswrapper[5028]: E1123 08:53:26.519837 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9\": container with ID starting with 1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9 not found: ID does not exist" containerID="1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.519895 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9"} err="failed to get container status \"1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9\": rpc error: code = NotFound desc = could not find container \"1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9\": container with ID starting with 1fd1e9682e87a56dbde6256e54d2a1829ebbffb66fd841b2d444a263fd2a48e9 not found: ID does not exist" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.519932 5028 scope.go:117] "RemoveContainer" containerID="5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f" Nov 23 08:53:26 crc kubenswrapper[5028]: E1123 08:53:26.520669 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f\": container with ID starting with 5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f not found: ID does not exist" containerID="5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.520694 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f"} err="failed to get container status \"5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f\": rpc error: code = NotFound desc = could not find container \"5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f\": container with ID starting with 5855c1b958b4cc854236099f9f9a50e532d085bcb81150ec8a5e4a7b4c58583f not found: ID does not exist" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.520708 5028 scope.go:117] "RemoveContainer" containerID="714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640" Nov 23 08:53:26 crc kubenswrapper[5028]: E1123 08:53:26.521182 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640\": container with ID starting with 714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640 not found: ID does not exist" containerID="714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.521207 5028 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640"} err="failed to get container status \"714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640\": rpc error: code = NotFound desc = could not find container \"714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640\": container with ID starting with 714d7fffea9674eefc52c72cd99eb880d1314d2fc55a728c66867ca1b1af5640 not found: ID does not exist" Nov 23 08:53:26 crc kubenswrapper[5028]: I1123 08:53:26.526730 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l2rxh"] Nov 23 08:53:27 crc kubenswrapper[5028]: I1123 08:53:27.080914 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" path="/var/lib/kubelet/pods/49c0e68c-1081-4439-b45f-81c45fef1f0e/volumes" Nov 23 08:53:27 crc kubenswrapper[5028]: I1123 08:53:27.931827 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:27 crc kubenswrapper[5028]: I1123 08:53:27.939062 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.038426 5028 scope.go:117] "RemoveContainer" containerID="8a82c1c7d81135ee9627db662020cfb566d9d4b3020d4649fb82f53566334da1" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.068106 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffb19a-ff86-48f4-b489-ccd5611101db-operator-scripts\") pod \"1bffb19a-ff86-48f4-b489-ccd5611101db\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.068490 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt5f4\" (UniqueName: \"kubernetes.io/projected/1bffb19a-ff86-48f4-b489-ccd5611101db-kube-api-access-pt5f4\") pod \"1bffb19a-ff86-48f4-b489-ccd5611101db\" (UID: \"1bffb19a-ff86-48f4-b489-ccd5611101db\") " Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.068740 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0fc5999-6ccc-4626-a5a9-456473823758-operator-scripts\") pod \"a0fc5999-6ccc-4626-a5a9-456473823758\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.068910 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kd4j\" (UniqueName: \"kubernetes.io/projected/a0fc5999-6ccc-4626-a5a9-456473823758-kube-api-access-7kd4j\") pod \"a0fc5999-6ccc-4626-a5a9-456473823758\" (UID: \"a0fc5999-6ccc-4626-a5a9-456473823758\") " Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.069898 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bffb19a-ff86-48f4-b489-ccd5611101db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bffb19a-ff86-48f4-b489-ccd5611101db" (UID: "1bffb19a-ff86-48f4-b489-ccd5611101db"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.070048 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0fc5999-6ccc-4626-a5a9-456473823758-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0fc5999-6ccc-4626-a5a9-456473823758" (UID: "a0fc5999-6ccc-4626-a5a9-456473823758"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.071326 5028 scope.go:117] "RemoveContainer" containerID="450e2054e7700cc3989e5198652634753d6eeb36f40a1ce45641c0a9d5714cca" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.076325 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0fc5999-6ccc-4626-a5a9-456473823758-kube-api-access-7kd4j" (OuterVolumeSpecName: "kube-api-access-7kd4j") pod "a0fc5999-6ccc-4626-a5a9-456473823758" (UID: "a0fc5999-6ccc-4626-a5a9-456473823758"). InnerVolumeSpecName "kube-api-access-7kd4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.076710 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bffb19a-ff86-48f4-b489-ccd5611101db-kube-api-access-pt5f4" (OuterVolumeSpecName: "kube-api-access-pt5f4") pod "1bffb19a-ff86-48f4-b489-ccd5611101db" (UID: "1bffb19a-ff86-48f4-b489-ccd5611101db"). InnerVolumeSpecName "kube-api-access-pt5f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.172445 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bffb19a-ff86-48f4-b489-ccd5611101db-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.172501 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt5f4\" (UniqueName: \"kubernetes.io/projected/1bffb19a-ff86-48f4-b489-ccd5611101db-kube-api-access-pt5f4\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.172520 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0fc5999-6ccc-4626-a5a9-456473823758-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.172535 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kd4j\" (UniqueName: \"kubernetes.io/projected/a0fc5999-6ccc-4626-a5a9-456473823758-kube-api-access-7kd4j\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.190985 5028 scope.go:117] "RemoveContainer" containerID="eea468cbd55fd618afaab7488182319ee7578917e5842eaceb27f26c3dfc9c8c" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.442421 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-dwhsb" event={"ID":"1bffb19a-ff86-48f4-b489-ccd5611101db","Type":"ContainerDied","Data":"2bc0bd8158351f8cf4344860b35323137377ad6becf853f34a9bc0a61a27bb91"} Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.442475 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-dwhsb" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.442493 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bc0bd8158351f8cf4344860b35323137377ad6becf853f34a9bc0a61a27bb91" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.444698 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-14aa-account-create-v6nm6" event={"ID":"a0fc5999-6ccc-4626-a5a9-456473823758","Type":"ContainerDied","Data":"8be11a68a1160f025a261af1e3d8aea5291acdfedb1f8bcb412bb4952ddda6c0"} Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.444760 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be11a68a1160f025a261af1e3d8aea5291acdfedb1f8bcb412bb4952ddda6c0" Nov 23 08:53:28 crc kubenswrapper[5028]: I1123 08:53:28.444721 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-14aa-account-create-v6nm6" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.576350 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-dcdb2"] Nov 23 08:53:29 crc kubenswrapper[5028]: E1123 08:53:29.577536 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="extract-content" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.577565 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="extract-content" Nov 23 08:53:29 crc kubenswrapper[5028]: E1123 08:53:29.577581 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0fc5999-6ccc-4626-a5a9-456473823758" containerName="mariadb-account-create" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.577591 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0fc5999-6ccc-4626-a5a9-456473823758" containerName="mariadb-account-create" Nov 23 08:53:29 crc kubenswrapper[5028]: E1123 08:53:29.577639 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bffb19a-ff86-48f4-b489-ccd5611101db" containerName="mariadb-database-create" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.577650 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bffb19a-ff86-48f4-b489-ccd5611101db" containerName="mariadb-database-create" Nov 23 08:53:29 crc kubenswrapper[5028]: E1123 08:53:29.577665 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="extract-utilities" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.577673 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="extract-utilities" Nov 23 08:53:29 crc kubenswrapper[5028]: E1123 08:53:29.577711 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="registry-server" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.577721 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="registry-server" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.577990 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0fc5999-6ccc-4626-a5a9-456473823758" containerName="mariadb-account-create" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.578013 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bffb19a-ff86-48f4-b489-ccd5611101db" 
containerName="mariadb-database-create" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.578026 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c0e68c-1081-4439-b45f-81c45fef1f0e" containerName="registry-server" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.579130 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.585720 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.586586 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hxhmp" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.598792 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dcdb2"] Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.715860 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4sx\" (UniqueName: \"kubernetes.io/projected/38c15f0f-729e-4d85-9357-570386dd2486-kube-api-access-4j4sx\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.715931 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-config-data\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.716079 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-combined-ca-bundle\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.817627 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4sx\" (UniqueName: \"kubernetes.io/projected/38c15f0f-729e-4d85-9357-570386dd2486-kube-api-access-4j4sx\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.817859 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-config-data\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.817985 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-combined-ca-bundle\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.832981 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-config-data\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 
08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.843414 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-combined-ca-bundle\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.844481 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4sx\" (UniqueName: \"kubernetes.io/projected/38c15f0f-729e-4d85-9357-570386dd2486-kube-api-access-4j4sx\") pod \"heat-db-sync-dcdb2\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:29 crc kubenswrapper[5028]: I1123 08:53:29.902961 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:30 crc kubenswrapper[5028]: I1123 08:53:30.462471 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dcdb2"] Nov 23 08:53:30 crc kubenswrapper[5028]: W1123 08:53:30.465756 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c15f0f_729e_4d85_9357_570386dd2486.slice/crio-c4b4fecb32ccec7d3f4fdfba6d4859b2c8f6dbb6302f3cd6ed6eb689cd35fc06 WatchSource:0}: Error finding container c4b4fecb32ccec7d3f4fdfba6d4859b2c8f6dbb6302f3cd6ed6eb689cd35fc06: Status 404 returned error can't find the container with id c4b4fecb32ccec7d3f4fdfba6d4859b2c8f6dbb6302f3cd6ed6eb689cd35fc06 Nov 23 08:53:30 crc kubenswrapper[5028]: I1123 08:53:30.946550 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:53:30 crc kubenswrapper[5028]: I1123 08:53:30.947087 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:53:31 crc kubenswrapper[5028]: I1123 08:53:31.494364 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dcdb2" event={"ID":"38c15f0f-729e-4d85-9357-570386dd2486","Type":"ContainerStarted","Data":"c4b4fecb32ccec7d3f4fdfba6d4859b2c8f6dbb6302f3cd6ed6eb689cd35fc06"} Nov 23 08:53:33 crc kubenswrapper[5028]: I1123 08:53:33.303791 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:33 crc kubenswrapper[5028]: I1123 08:53:33.304283 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:34 crc kubenswrapper[5028]: I1123 08:53:34.062452 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xr55r"] Nov 23 08:53:34 crc kubenswrapper[5028]: I1123 08:53:34.071032 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bed-account-create-q2wl9"] Nov 23 08:53:34 crc kubenswrapper[5028]: I1123 08:53:34.078905 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xr55r"] Nov 23 08:53:34 crc kubenswrapper[5028]: I1123 
08:53:34.086586 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5bed-account-create-q2wl9"] Nov 23 08:53:35 crc kubenswrapper[5028]: I1123 08:53:35.064361 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9067c25f-b237-4cd9-9077-3b6aad08551b" path="/var/lib/kubelet/pods/9067c25f-b237-4cd9-9077-3b6aad08551b/volumes" Nov 23 08:53:35 crc kubenswrapper[5028]: I1123 08:53:35.065468 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5dd3b9-5a4f-4687-9776-683799bf06dd" path="/var/lib/kubelet/pods/ea5dd3b9-5a4f-4687-9776-683799bf06dd/volumes" Nov 23 08:53:40 crc kubenswrapper[5028]: I1123 08:53:40.621750 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dcdb2" event={"ID":"38c15f0f-729e-4d85-9357-570386dd2486","Type":"ContainerStarted","Data":"85ab6d2554b14739ddd50d078da884f4d100a1e70fab0322e53fd7a6b3da9d6e"} Nov 23 08:53:41 crc kubenswrapper[5028]: I1123 08:53:41.636541 5028 generic.go:334] "Generic (PLEG): container finished" podID="38c15f0f-729e-4d85-9357-570386dd2486" containerID="85ab6d2554b14739ddd50d078da884f4d100a1e70fab0322e53fd7a6b3da9d6e" exitCode=0 Nov 23 08:53:41 crc kubenswrapper[5028]: I1123 08:53:41.636720 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dcdb2" event={"ID":"38c15f0f-729e-4d85-9357-570386dd2486","Type":"ContainerDied","Data":"85ab6d2554b14739ddd50d078da884f4d100a1e70fab0322e53fd7a6b3da9d6e"} Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.179698 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.271788 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-config-data\") pod \"38c15f0f-729e-4d85-9357-570386dd2486\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.272245 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-combined-ca-bundle\") pod \"38c15f0f-729e-4d85-9357-570386dd2486\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.272303 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j4sx\" (UniqueName: \"kubernetes.io/projected/38c15f0f-729e-4d85-9357-570386dd2486-kube-api-access-4j4sx\") pod \"38c15f0f-729e-4d85-9357-570386dd2486\" (UID: \"38c15f0f-729e-4d85-9357-570386dd2486\") " Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.286276 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c15f0f-729e-4d85-9357-570386dd2486-kube-api-access-4j4sx" (OuterVolumeSpecName: "kube-api-access-4j4sx") pod "38c15f0f-729e-4d85-9357-570386dd2486" (UID: "38c15f0f-729e-4d85-9357-570386dd2486"). InnerVolumeSpecName "kube-api-access-4j4sx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.305193 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b8b66b649-ppjcx" podUID="2457f13b-dc7a-450c-b083-00edbc261f14" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.110:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8080: connect: connection refused" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.324782 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38c15f0f-729e-4d85-9357-570386dd2486" (UID: "38c15f0f-729e-4d85-9357-570386dd2486"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.362385 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-config-data" (OuterVolumeSpecName: "config-data") pod "38c15f0f-729e-4d85-9357-570386dd2486" (UID: "38c15f0f-729e-4d85-9357-570386dd2486"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.377338 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.377388 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j4sx\" (UniqueName: \"kubernetes.io/projected/38c15f0f-729e-4d85-9357-570386dd2486-kube-api-access-4j4sx\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.377415 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c15f0f-729e-4d85-9357-570386dd2486-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.672253 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dcdb2" event={"ID":"38c15f0f-729e-4d85-9357-570386dd2486","Type":"ContainerDied","Data":"c4b4fecb32ccec7d3f4fdfba6d4859b2c8f6dbb6302f3cd6ed6eb689cd35fc06"} Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.672334 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b4fecb32ccec7d3f4fdfba6d4859b2c8f6dbb6302f3cd6ed6eb689cd35fc06" Nov 23 08:53:43 crc kubenswrapper[5028]: I1123 08:53:43.672512 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dcdb2" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.055170 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pfnxd"] Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.066178 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pfnxd"] Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.867143 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-799587747-2v7jz"] Nov 23 08:53:44 crc kubenswrapper[5028]: E1123 08:53:44.870678 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c15f0f-729e-4d85-9357-570386dd2486" containerName="heat-db-sync" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.870703 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c15f0f-729e-4d85-9357-570386dd2486" containerName="heat-db-sync" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.870936 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c15f0f-729e-4d85-9357-570386dd2486" containerName="heat-db-sync" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.871842 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.890914 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-hxhmp" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.891172 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.891304 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.918467 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-combined-ca-bundle\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.918529 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-config-data-custom\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.918579 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vng6t\" (UniqueName: \"kubernetes.io/projected/2c63a7d1-5df2-41bc-8896-942c44597e22-kube-api-access-vng6t\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.918707 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-config-data\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:44 crc kubenswrapper[5028]: I1123 08:53:44.945110 5028 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/heat-engine-799587747-2v7jz"] Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.021809 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-config-data\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.021905 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-combined-ca-bundle\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.021934 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-config-data-custom\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.021990 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vng6t\" (UniqueName: \"kubernetes.io/projected/2c63a7d1-5df2-41bc-8896-942c44597e22-kube-api-access-vng6t\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.034321 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7bc7d76d4f-pp486"] Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.035919 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.039233 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.042152 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-combined-ca-bundle\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.064878 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-config-data\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.067238 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vng6t\" (UniqueName: \"kubernetes.io/projected/2c63a7d1-5df2-41bc-8896-942c44597e22-kube-api-access-vng6t\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.097361 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c63a7d1-5df2-41bc-8896-942c44597e22-config-data-custom\") pod \"heat-engine-799587747-2v7jz\" (UID: \"2c63a7d1-5df2-41bc-8896-942c44597e22\") " pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.105643 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b30b1ec-c387-488e-9175-fbc068279c73" path="/var/lib/kubelet/pods/3b30b1ec-c387-488e-9175-fbc068279c73/volumes" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.113979 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7bc7d76d4f-pp486"] Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.126663 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss9np\" (UniqueName: \"kubernetes.io/projected/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-kube-api-access-ss9np\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.126803 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-config-data\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.126870 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-config-data-custom\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.126982 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-combined-ca-bundle\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.140069 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5686cf9857-crmbj"] Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.141620 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.144001 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.165597 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5686cf9857-crmbj"] Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.229910 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-config-data\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.230019 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-config-data-custom\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.230089 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-combined-ca-bundle\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.230117 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-config-data-custom\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.230245 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-combined-ca-bundle\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.230334 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss9np\" (UniqueName: \"kubernetes.io/projected/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-kube-api-access-ss9np\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.230446 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9vvwn\" (UniqueName: \"kubernetes.io/projected/c9451837-c860-4f58-875e-1394ca0bc0fc-kube-api-access-9vvwn\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.230729 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-config-data\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.232783 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.236386 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-config-data\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.237169 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-config-data-custom\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.238467 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-combined-ca-bundle\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.259346 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss9np\" (UniqueName: \"kubernetes.io/projected/98ebe7a4-c6a0-4179-8db8-e164a706b3a6-kube-api-access-ss9np\") pod \"heat-cfnapi-7bc7d76d4f-pp486\" (UID: \"98ebe7a4-c6a0-4179-8db8-e164a706b3a6\") " pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.332226 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvwn\" (UniqueName: \"kubernetes.io/projected/c9451837-c860-4f58-875e-1394ca0bc0fc-kube-api-access-9vvwn\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.332279 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-config-data\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.332373 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-combined-ca-bundle\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc 
kubenswrapper[5028]: I1123 08:53:45.332400 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-config-data-custom\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.339145 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-combined-ca-bundle\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.342541 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-config-data-custom\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.343839 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9451837-c860-4f58-875e-1394ca0bc0fc-config-data\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.353839 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvwn\" (UniqueName: \"kubernetes.io/projected/c9451837-c860-4f58-875e-1394ca0bc0fc-kube-api-access-9vvwn\") pod \"heat-api-5686cf9857-crmbj\" (UID: \"c9451837-c860-4f58-875e-1394ca0bc0fc\") " pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.475710 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.486739 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:45 crc kubenswrapper[5028]: I1123 08:53:45.790310 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-799587747-2v7jz"] Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.071892 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7bc7d76d4f-pp486"] Nov 23 08:53:46 crc kubenswrapper[5028]: W1123 08:53:46.080416 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98ebe7a4_c6a0_4179_8db8_e164a706b3a6.slice/crio-979e4d68eae39d2d9eb94ea84f34e3a892e5face46a8780ff7cbc62c45e0a515 WatchSource:0}: Error finding container 979e4d68eae39d2d9eb94ea84f34e3a892e5face46a8780ff7cbc62c45e0a515: Status 404 returned error can't find the container with id 979e4d68eae39d2d9eb94ea84f34e3a892e5face46a8780ff7cbc62c45e0a515 Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.121253 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5686cf9857-crmbj"] Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.706183 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" event={"ID":"98ebe7a4-c6a0-4179-8db8-e164a706b3a6","Type":"ContainerStarted","Data":"979e4d68eae39d2d9eb94ea84f34e3a892e5face46a8780ff7cbc62c45e0a515"} Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.708487 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-799587747-2v7jz" event={"ID":"2c63a7d1-5df2-41bc-8896-942c44597e22","Type":"ContainerStarted","Data":"a56b44b327b1c659617deb17c7e2616fb1b9920696f40f7100de1db327409705"} Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.708521 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-799587747-2v7jz" event={"ID":"2c63a7d1-5df2-41bc-8896-942c44597e22","Type":"ContainerStarted","Data":"8b5af3fb9135f3c4ccb84db02abe4fb3d930df172e7c6dd7205d3b3ec481eb0f"} Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.708567 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.712839 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5686cf9857-crmbj" event={"ID":"c9451837-c860-4f58-875e-1394ca0bc0fc","Type":"ContainerStarted","Data":"99d2adc9d25a75027b12c80241f9ab71910d4f34190bb3f680932c17fe80891a"} Nov 23 08:53:46 crc kubenswrapper[5028]: I1123 08:53:46.730898 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-799587747-2v7jz" podStartSLOduration=2.730879824 podStartE2EDuration="2.730879824s" podCreationTimestamp="2025-11-23 08:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:53:46.725333717 +0000 UTC m=+7410.422738496" watchObservedRunningTime="2025-11-23 08:53:46.730879824 +0000 UTC m=+7410.428284603" Nov 23 08:53:48 crc kubenswrapper[5028]: I1123 08:53:48.739505 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5686cf9857-crmbj" event={"ID":"c9451837-c860-4f58-875e-1394ca0bc0fc","Type":"ContainerStarted","Data":"0bc7ff4a49b4a30bb91ff55497c73b175575184e958dc10046445e5c230414dd"} Nov 23 08:53:48 crc kubenswrapper[5028]: I1123 08:53:48.741965 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:48 crc kubenswrapper[5028]: I1123 08:53:48.742112 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" event={"ID":"98ebe7a4-c6a0-4179-8db8-e164a706b3a6","Type":"ContainerStarted","Data":"caebe687529d0fe61a90c867961bdd13610084719471566828b203956e8f2593"} Nov 23 08:53:48 crc kubenswrapper[5028]: I1123 08:53:48.742278 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:48 crc kubenswrapper[5028]: I1123 08:53:48.766571 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5686cf9857-crmbj" podStartSLOduration=1.7206634429999998 podStartE2EDuration="3.766553307s" podCreationTimestamp="2025-11-23 08:53:45 +0000 UTC" firstStartedPulling="2025-11-23 08:53:46.132112136 +0000 UTC m=+7409.829516915" lastFinishedPulling="2025-11-23 08:53:48.17800201 +0000 UTC m=+7411.875406779" observedRunningTime="2025-11-23 08:53:48.762035166 +0000 UTC m=+7412.459439945" watchObservedRunningTime="2025-11-23 08:53:48.766553307 +0000 UTC m=+7412.463958086" Nov 23 08:53:48 crc kubenswrapper[5028]: I1123 08:53:48.791671 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" podStartSLOduration=1.687519867 podStartE2EDuration="3.791648605s" podCreationTimestamp="2025-11-23 08:53:45 +0000 UTC" firstStartedPulling="2025-11-23 08:53:46.083586561 +0000 UTC m=+7409.780991340" lastFinishedPulling="2025-11-23 08:53:48.187715299 +0000 UTC m=+7411.885120078" observedRunningTime="2025-11-23 08:53:48.78534689 +0000 UTC m=+7412.482751689" watchObservedRunningTime="2025-11-23 08:53:48.791648605 +0000 UTC m=+7412.489053374" Nov 23 08:53:55 crc kubenswrapper[5028]: I1123 08:53:55.243224 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:56 crc kubenswrapper[5028]: I1123 08:53:56.794343 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7bc7d76d4f-pp486" Nov 23 08:53:56 crc kubenswrapper[5028]: I1123 08:53:56.971523 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5686cf9857-crmbj" Nov 23 08:53:57 crc kubenswrapper[5028]: I1123 08:53:57.100698 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b8b66b649-ppjcx" Nov 23 08:53:57 crc kubenswrapper[5028]: I1123 08:53:57.179394 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85dc44d577-v4j2h"] Nov 23 08:53:57 crc kubenswrapper[5028]: I1123 08:53:57.179702 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-85dc44d577-v4j2h" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon-log" containerID="cri-o://bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c" gracePeriod=30 Nov 23 08:53:57 crc kubenswrapper[5028]: I1123 08:53:57.182142 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-85dc44d577-v4j2h" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" containerID="cri-o://f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500" gracePeriod=30 Nov 23 08:54:00 crc kubenswrapper[5028]: I1123 08:54:00.884123 5028 generic.go:334] "Generic (PLEG): container finished" podID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" 
containerID="f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500" exitCode=0 Nov 23 08:54:00 crc kubenswrapper[5028]: I1123 08:54:00.884773 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85dc44d577-v4j2h" event={"ID":"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597","Type":"ContainerDied","Data":"f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500"} Nov 23 08:54:00 crc kubenswrapper[5028]: I1123 08:54:00.946567 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:54:00 crc kubenswrapper[5028]: I1123 08:54:00.946686 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:54:01 crc kubenswrapper[5028]: I1123 08:54:01.573807 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-85dc44d577-v4j2h" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused" Nov 23 08:54:05 crc kubenswrapper[5028]: I1123 08:54:05.288315 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-799587747-2v7jz" Nov 23 08:54:11 crc kubenswrapper[5028]: I1123 08:54:11.573144 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-85dc44d577-v4j2h" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.507258 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g"] Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.510497 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.513889 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.531901 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g"] Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.605903 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.606222 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.606704 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95c4\" (UniqueName: \"kubernetes.io/projected/fcb68667-8c05-4e65-89d0-de18923a88cc-kube-api-access-n95c4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.709127 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95c4\" (UniqueName: \"kubernetes.io/projected/fcb68667-8c05-4e65-89d0-de18923a88cc-kube-api-access-n95c4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.709303 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.709354 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.709881 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.710148 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.752826 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95c4\" (UniqueName: \"kubernetes.io/projected/fcb68667-8c05-4e65-89d0-de18923a88cc-kube-api-access-n95c4\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:14 crc kubenswrapper[5028]: I1123 08:54:14.838027 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:15 crc kubenswrapper[5028]: I1123 08:54:15.351709 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g"] Nov 23 08:54:16 crc kubenswrapper[5028]: I1123 08:54:16.066616 5028 generic.go:334] "Generic (PLEG): container finished" podID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerID="e31fbace2e60538930c759474a267bc33c24792016886f7d34fc4d46361545c2" exitCode=0 Nov 23 08:54:16 crc kubenswrapper[5028]: I1123 08:54:16.066671 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" event={"ID":"fcb68667-8c05-4e65-89d0-de18923a88cc","Type":"ContainerDied","Data":"e31fbace2e60538930c759474a267bc33c24792016886f7d34fc4d46361545c2"} Nov 23 08:54:16 crc kubenswrapper[5028]: I1123 08:54:16.067262 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" event={"ID":"fcb68667-8c05-4e65-89d0-de18923a88cc","Type":"ContainerStarted","Data":"ccf7d37bd8143f1329b16b0ebe50a32ae2cdffe23372656c1e2776f328ad8d1d"} Nov 23 08:54:19 crc kubenswrapper[5028]: I1123 08:54:19.108037 5028 generic.go:334] "Generic (PLEG): container finished" podID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerID="bd82a85f5659174b541ce58ebb794484751e6d36572021652db2f224c90eb050" exitCode=0 Nov 23 08:54:19 crc kubenswrapper[5028]: I1123 08:54:19.108162 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" event={"ID":"fcb68667-8c05-4e65-89d0-de18923a88cc","Type":"ContainerDied","Data":"bd82a85f5659174b541ce58ebb794484751e6d36572021652db2f224c90eb050"} Nov 23 08:54:20 crc kubenswrapper[5028]: I1123 08:54:20.131232 5028 generic.go:334] "Generic (PLEG): container finished" podID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerID="a3b11690504245a6f46089b789407058f2fe537e0a034c4b129cd376f1b86c6a" exitCode=0 Nov 23 08:54:20 crc kubenswrapper[5028]: I1123 
08:54:20.131289 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" event={"ID":"fcb68667-8c05-4e65-89d0-de18923a88cc","Type":"ContainerDied","Data":"a3b11690504245a6f46089b789407058f2fe537e0a034c4b129cd376f1b86c6a"} Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.566737 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.575988 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-85dc44d577-v4j2h" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.105:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.105:8080: connect: connection refused" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.576122 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.715023 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-bundle\") pod \"fcb68667-8c05-4e65-89d0-de18923a88cc\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.715695 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95c4\" (UniqueName: \"kubernetes.io/projected/fcb68667-8c05-4e65-89d0-de18923a88cc-kube-api-access-n95c4\") pod \"fcb68667-8c05-4e65-89d0-de18923a88cc\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.715813 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-util\") pod \"fcb68667-8c05-4e65-89d0-de18923a88cc\" (UID: \"fcb68667-8c05-4e65-89d0-de18923a88cc\") " Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.718196 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-bundle" (OuterVolumeSpecName: "bundle") pod "fcb68667-8c05-4e65-89d0-de18923a88cc" (UID: "fcb68667-8c05-4e65-89d0-de18923a88cc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.724907 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb68667-8c05-4e65-89d0-de18923a88cc-kube-api-access-n95c4" (OuterVolumeSpecName: "kube-api-access-n95c4") pod "fcb68667-8c05-4e65-89d0-de18923a88cc" (UID: "fcb68667-8c05-4e65-89d0-de18923a88cc"). InnerVolumeSpecName "kube-api-access-n95c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.726141 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-util" (OuterVolumeSpecName: "util") pod "fcb68667-8c05-4e65-89d0-de18923a88cc" (UID: "fcb68667-8c05-4e65-89d0-de18923a88cc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.818117 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95c4\" (UniqueName: \"kubernetes.io/projected/fcb68667-8c05-4e65-89d0-de18923a88cc-kube-api-access-n95c4\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.818157 5028 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-util\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:21 crc kubenswrapper[5028]: I1123 08:54:21.818170 5028 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fcb68667-8c05-4e65-89d0-de18923a88cc-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:22 crc kubenswrapper[5028]: I1123 08:54:22.161310 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" event={"ID":"fcb68667-8c05-4e65-89d0-de18923a88cc","Type":"ContainerDied","Data":"ccf7d37bd8143f1329b16b0ebe50a32ae2cdffe23372656c1e2776f328ad8d1d"} Nov 23 08:54:22 crc kubenswrapper[5028]: I1123 08:54:22.161840 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccf7d37bd8143f1329b16b0ebe50a32ae2cdffe23372656c1e2776f328ad8d1d" Nov 23 08:54:22 crc kubenswrapper[5028]: I1123 08:54:22.161444 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g" Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.180432 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fk4lw"] Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.182062 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fk4lw"] Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.220929 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9363-account-create-7cl9l"] Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.247313 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9363-account-create-7cl9l"] Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.761503 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.903760 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-horizon-secret-key\") pod \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.904103 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-scripts\") pod \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.904244 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-logs\") pod \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.904291 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9755g\" (UniqueName: \"kubernetes.io/projected/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-kube-api-access-9755g\") pod \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.904341 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-config-data\") pod \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\" (UID: \"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597\") " Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.906284 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-logs" (OuterVolumeSpecName: "logs") pod "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" (UID: "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.964478 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-kube-api-access-9755g" (OuterVolumeSpecName: "kube-api-access-9755g") pod "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" (UID: "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597"). InnerVolumeSpecName "kube-api-access-9755g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:54:27 crc kubenswrapper[5028]: I1123 08:54:27.971788 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" (UID: "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.012637 5028 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.012685 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-logs\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.012699 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9755g\" (UniqueName: \"kubernetes.io/projected/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-kube-api-access-9755g\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.015943 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-config-data" (OuterVolumeSpecName: "config-data") pod "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" (UID: "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.033500 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-scripts" (OuterVolumeSpecName: "scripts") pod "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" (UID: "94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.114939 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.115733 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.234335 5028 generic.go:334] "Generic (PLEG): container finished" podID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerID="bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c" exitCode=137 Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.234392 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85dc44d577-v4j2h" event={"ID":"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597","Type":"ContainerDied","Data":"bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c"} Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.234425 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85dc44d577-v4j2h" event={"ID":"94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597","Type":"ContainerDied","Data":"1d864055a5616b9c466921d7c8baf420cd79254e94add29b052a6804e148b5a2"} Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.234445 5028 scope.go:117] "RemoveContainer" containerID="f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.234441 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85dc44d577-v4j2h" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.303363 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85dc44d577-v4j2h"] Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.317832 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85dc44d577-v4j2h"] Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.320064 5028 scope.go:117] "RemoveContainer" containerID="8b38f5e69c321e10d3c96deb399e79ef7e0e2a4ee8ba4d16159c26a7ae10c244" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.481777 5028 scope.go:117] "RemoveContainer" containerID="bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.511387 5028 scope.go:117] "RemoveContainer" containerID="23c35f5539008078f45b336ef95d124dff98a70a8e9cb7eec6d89ea5a4688e9a" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.579202 5028 scope.go:117] "RemoveContainer" containerID="b2070a992faec4113ffac7548b341f74f9e46f23408a06b0fe5177c4e7175e1d" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.608138 5028 scope.go:117] "RemoveContainer" containerID="f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500" Nov 23 08:54:28 crc kubenswrapper[5028]: E1123 08:54:28.608656 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500\": container with ID starting with f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500 not found: ID does not exist" containerID="f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.608695 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500"} err="failed to get container status \"f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500\": rpc error: code = NotFound desc = could not find container \"f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500\": container with ID starting with f3f895d4104e2de28a7061964e63d488698a47d84780be345782b15bd0168500 not found: ID does not exist" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.608725 5028 scope.go:117] "RemoveContainer" containerID="bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c" Nov 23 08:54:28 crc kubenswrapper[5028]: E1123 08:54:28.608914 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c\": container with ID starting with bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c not found: ID does not exist" containerID="bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.608938 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c"} err="failed to get container status \"bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c\": rpc error: code = NotFound desc = could not find container \"bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c\": container with ID starting with bcfe278049c42895910e3cfbd7b562a202cdd6ee334a1cb4ef73b183720fff3c not 
found: ID does not exist" Nov 23 08:54:28 crc kubenswrapper[5028]: I1123 08:54:28.638806 5028 scope.go:117] "RemoveContainer" containerID="9ca5076792e782196270b773502415dc21d34469a04aed6057a0a9880a88be98" Nov 23 08:54:29 crc kubenswrapper[5028]: I1123 08:54:29.065880 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c" path="/var/lib/kubelet/pods/5a2fda3a-fa25-4e5c-8b3d-419e3b6f6b8c/volumes" Nov 23 08:54:29 crc kubenswrapper[5028]: I1123 08:54:29.066514 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="670cfcb3-3bbc-4029-81ec-db084de0cd16" path="/var/lib/kubelet/pods/670cfcb3-3bbc-4029-81ec-db084de0cd16/volumes" Nov 23 08:54:29 crc kubenswrapper[5028]: I1123 08:54:29.067153 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" path="/var/lib/kubelet/pods/94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597/volumes" Nov 23 08:54:30 crc kubenswrapper[5028]: I1123 08:54:30.951141 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 08:54:30 crc kubenswrapper[5028]: I1123 08:54:30.951669 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 08:54:30 crc kubenswrapper[5028]: I1123 08:54:30.951728 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 08:54:30 crc kubenswrapper[5028]: I1123 08:54:30.952796 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 08:54:30 crc kubenswrapper[5028]: I1123 08:54:30.952850 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" gracePeriod=600 Nov 23 08:54:31 crc kubenswrapper[5028]: E1123 08:54:31.128777 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:54:31 crc kubenswrapper[5028]: I1123 08:54:31.277194 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" exitCode=0 Nov 23 08:54:31 crc kubenswrapper[5028]: I1123 08:54:31.277259 5028 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5"} Nov 23 08:54:31 crc kubenswrapper[5028]: I1123 08:54:31.277312 5028 scope.go:117] "RemoveContainer" containerID="a64b72f76a8fe768b7b1776afaa348b6635ec48477767013a700f559c89fe286" Nov 23 08:54:31 crc kubenswrapper[5028]: I1123 08:54:31.278508 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:54:31 crc kubenswrapper[5028]: E1123 08:54:31.278821 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.821518 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45"] Nov 23 08:54:32 crc kubenswrapper[5028]: E1123 08:54:32.822175 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerName="extract" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822188 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerName="extract" Nov 23 08:54:32 crc kubenswrapper[5028]: E1123 08:54:32.822210 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon-log" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822217 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon-log" Nov 23 08:54:32 crc kubenswrapper[5028]: E1123 08:54:32.822234 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerName="util" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822240 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerName="util" Nov 23 08:54:32 crc kubenswrapper[5028]: E1123 08:54:32.822251 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerName="pull" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822257 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerName="pull" Nov 23 08:54:32 crc kubenswrapper[5028]: E1123 08:54:32.822273 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822280 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822495 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon-log" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822508 5028 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="94ef79bb-2fd5-4c87-a7d3-28cc6d8ae597" containerName="horizon" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.822519 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb68667-8c05-4e65-89d0-de18923a88cc" containerName="extract" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.823269 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.836824 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-cjlrf" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.837240 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.837255 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.846786 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45"] Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.886778 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md"] Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.902111 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87dt\" (UniqueName: \"kubernetes.io/projected/6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f-kube-api-access-j87dt\") pod \"obo-prometheus-operator-668cf9dfbb-txn45\" (UID: \"6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.906570 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.909707 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-hsbdz" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.910128 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.927541 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv"] Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.929837 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.963149 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md"] Nov 23 08:54:32 crc kubenswrapper[5028]: I1123 08:54:32.974553 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv"] Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.009485 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7c0087-1818-46f5-a3cb-44d8d6664038-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv\" (UID: \"8c7c0087-1818-46f5-a3cb-44d8d6664038\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.009610 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7c0087-1818-46f5-a3cb-44d8d6664038-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv\" (UID: \"8c7c0087-1818-46f5-a3cb-44d8d6664038\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.009863 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87dt\" (UniqueName: \"kubernetes.io/projected/6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f-kube-api-access-j87dt\") pod \"obo-prometheus-operator-668cf9dfbb-txn45\" (UID: \"6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.040758 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87dt\" (UniqueName: \"kubernetes.io/projected/6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f-kube-api-access-j87dt\") pod \"obo-prometheus-operator-668cf9dfbb-txn45\" (UID: \"6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.070876 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6pplf"] Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.072746 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.077841 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ffr5c" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.078090 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.113591 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7c0087-1818-46f5-a3cb-44d8d6664038-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv\" (UID: \"8c7c0087-1818-46f5-a3cb-44d8d6664038\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.113673 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/248c4948-5223-40e3-baec-48dfd3c4877f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-956md\" (UID: \"248c4948-5223-40e3-baec-48dfd3c4877f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.113708 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7c0087-1818-46f5-a3cb-44d8d6664038-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv\" (UID: \"8c7c0087-1818-46f5-a3cb-44d8d6664038\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.113738 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/248c4948-5223-40e3-baec-48dfd3c4877f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-956md\" (UID: \"248c4948-5223-40e3-baec-48dfd3c4877f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.118431 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7c0087-1818-46f5-a3cb-44d8d6664038-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv\" (UID: \"8c7c0087-1818-46f5-a3cb-44d8d6664038\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.118638 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7c0087-1818-46f5-a3cb-44d8d6664038-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv\" (UID: \"8c7c0087-1818-46f5-a3cb-44d8d6664038\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.131058 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6pplf"] Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.148759 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.216008 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdgd\" (UniqueName: \"kubernetes.io/projected/b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f-kube-api-access-pcdgd\") pod \"observability-operator-d8bb48f5d-6pplf\" (UID: \"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f\") " pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.216432 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6pplf\" (UID: \"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f\") " pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.216520 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/248c4948-5223-40e3-baec-48dfd3c4877f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-956md\" (UID: \"248c4948-5223-40e3-baec-48dfd3c4877f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.216804 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/248c4948-5223-40e3-baec-48dfd3c4877f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-956md\" (UID: \"248c4948-5223-40e3-baec-48dfd3c4877f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.222934 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/248c4948-5223-40e3-baec-48dfd3c4877f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-956md\" (UID: \"248c4948-5223-40e3-baec-48dfd3c4877f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.227415 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/248c4948-5223-40e3-baec-48dfd3c4877f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5595b49fcb-956md\" (UID: \"248c4948-5223-40e3-baec-48dfd3c4877f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.241535 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.277852 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.299668 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-nbsv6"] Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.310253 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.318071 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-fxn6c" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.328736 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-nbsv6"] Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.338763 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6pplf\" (UID: \"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f\") " pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.339456 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcdgd\" (UniqueName: \"kubernetes.io/projected/b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f-kube-api-access-pcdgd\") pod \"observability-operator-d8bb48f5d-6pplf\" (UID: \"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f\") " pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.357246 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6pplf\" (UID: \"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f\") " pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.404913 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdgd\" (UniqueName: \"kubernetes.io/projected/b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f-kube-api-access-pcdgd\") pod \"observability-operator-d8bb48f5d-6pplf\" (UID: \"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f\") " pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.444020 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/56fa1b54-4a14-48db-81bb-77bf95b64209-openshift-service-ca\") pod \"perses-operator-5446b9c989-nbsv6\" (UID: \"56fa1b54-4a14-48db-81bb-77bf95b64209\") " pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.444090 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6crn\" (UniqueName: \"kubernetes.io/projected/56fa1b54-4a14-48db-81bb-77bf95b64209-kube-api-access-m6crn\") pod \"perses-operator-5446b9c989-nbsv6\" (UID: \"56fa1b54-4a14-48db-81bb-77bf95b64209\") " pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.547465 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/56fa1b54-4a14-48db-81bb-77bf95b64209-openshift-service-ca\") pod \"perses-operator-5446b9c989-nbsv6\" (UID: \"56fa1b54-4a14-48db-81bb-77bf95b64209\") " pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.547559 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6crn\" (UniqueName: \"kubernetes.io/projected/56fa1b54-4a14-48db-81bb-77bf95b64209-kube-api-access-m6crn\") pod \"perses-operator-5446b9c989-nbsv6\" (UID: \"56fa1b54-4a14-48db-81bb-77bf95b64209\") " pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.550419 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/56fa1b54-4a14-48db-81bb-77bf95b64209-openshift-service-ca\") pod \"perses-operator-5446b9c989-nbsv6\" (UID: \"56fa1b54-4a14-48db-81bb-77bf95b64209\") " pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.581196 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6crn\" (UniqueName: \"kubernetes.io/projected/56fa1b54-4a14-48db-81bb-77bf95b64209-kube-api-access-m6crn\") pod \"perses-operator-5446b9c989-nbsv6\" (UID: \"56fa1b54-4a14-48db-81bb-77bf95b64209\") " pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.588710 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:33 crc kubenswrapper[5028]: I1123 08:54:33.645673 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:34 crc kubenswrapper[5028]: I1123 08:54:34.102698 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45"] Nov 23 08:54:34 crc kubenswrapper[5028]: I1123 08:54:34.228069 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv"] Nov 23 08:54:34 crc kubenswrapper[5028]: I1123 08:54:34.358738 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" event={"ID":"8c7c0087-1818-46f5-a3cb-44d8d6664038","Type":"ContainerStarted","Data":"c041e4037d55ccd09b6d9b3428b92b038eef7e7d3f113e85ce13827392ad4988"} Nov 23 08:54:34 crc kubenswrapper[5028]: I1123 08:54:34.360590 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" event={"ID":"6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f","Type":"ContainerStarted","Data":"c67a9788fc2e83c655f5e828b37cb77623cada34daaf9fdbb0d61554ad95b65e"} Nov 23 08:54:34 crc kubenswrapper[5028]: I1123 08:54:34.455253 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6pplf"] Nov 23 08:54:34 crc kubenswrapper[5028]: I1123 08:54:34.469750 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md"] Nov 23 08:54:34 crc kubenswrapper[5028]: W1123 08:54:34.474280 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1c5ba76_b433_4fa6_a9e5_0a5565b2b91f.slice/crio-7144e8a36ca3611ed7bbd14fda0152d1c61172dbbe6054845ab953be67e0ab09 WatchSource:0}: Error finding container 7144e8a36ca3611ed7bbd14fda0152d1c61172dbbe6054845ab953be67e0ab09: Status 404 returned error can't find the container with id 
7144e8a36ca3611ed7bbd14fda0152d1c61172dbbe6054845ab953be67e0ab09 Nov 23 08:54:34 crc kubenswrapper[5028]: W1123 08:54:34.475328 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod248c4948_5223_40e3_baec_48dfd3c4877f.slice/crio-13b6158cb40b342a489a779c6c8f75d2b4d30842544a55bc8284b362154a08a8 WatchSource:0}: Error finding container 13b6158cb40b342a489a779c6c8f75d2b4d30842544a55bc8284b362154a08a8: Status 404 returned error can't find the container with id 13b6158cb40b342a489a779c6c8f75d2b4d30842544a55bc8284b362154a08a8 Nov 23 08:54:34 crc kubenswrapper[5028]: W1123 08:54:34.605433 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56fa1b54_4a14_48db_81bb_77bf95b64209.slice/crio-c0abaf70cf65eca1f89fd3193d14aba9ef4c38018e6b183c2a77d828e908aa38 WatchSource:0}: Error finding container c0abaf70cf65eca1f89fd3193d14aba9ef4c38018e6b183c2a77d828e908aa38: Status 404 returned error can't find the container with id c0abaf70cf65eca1f89fd3193d14aba9ef4c38018e6b183c2a77d828e908aa38 Nov 23 08:54:34 crc kubenswrapper[5028]: I1123 08:54:34.606087 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-nbsv6"] Nov 23 08:54:35 crc kubenswrapper[5028]: I1123 08:54:35.387932 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-nbsv6" event={"ID":"56fa1b54-4a14-48db-81bb-77bf95b64209","Type":"ContainerStarted","Data":"c0abaf70cf65eca1f89fd3193d14aba9ef4c38018e6b183c2a77d828e908aa38"} Nov 23 08:54:35 crc kubenswrapper[5028]: I1123 08:54:35.412268 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" event={"ID":"248c4948-5223-40e3-baec-48dfd3c4877f","Type":"ContainerStarted","Data":"13b6158cb40b342a489a779c6c8f75d2b4d30842544a55bc8284b362154a08a8"} Nov 23 08:54:35 crc kubenswrapper[5028]: I1123 08:54:35.434175 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" event={"ID":"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f","Type":"ContainerStarted","Data":"7144e8a36ca3611ed7bbd14fda0152d1c61172dbbe6054845ab953be67e0ab09"} Nov 23 08:54:43 crc kubenswrapper[5028]: I1123 08:54:43.053359 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:54:43 crc kubenswrapper[5028]: E1123 08:54:43.054488 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.591467 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" event={"ID":"b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f","Type":"ContainerStarted","Data":"b5babae49de67a60ac118e2d8bfa42e1b3a9a5fa28430d456d2bf5f82a725a5e"} Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.592303 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 
08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.593179 5028 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-6pplf container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.121:8081/healthz\": dial tcp 10.217.1.121:8081: connect: connection refused" start-of-body= Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.593238 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" podUID="b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.121:8081/healthz\": dial tcp 10.217.1.121:8081: connect: connection refused" Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.593704 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-nbsv6" event={"ID":"56fa1b54-4a14-48db-81bb-77bf95b64209","Type":"ContainerStarted","Data":"45b375130d41f5e308c1e6a0458fadf757aea6dd55b2fe994ffd88cb02546982"} Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.593912 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.595256 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" event={"ID":"248c4948-5223-40e3-baec-48dfd3c4877f","Type":"ContainerStarted","Data":"bdfccc1110329ef774bd3fb6ae47013a90706fad48f21bdc752e7260fd259657"} Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.619773 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" podStartSLOduration=2.200279886 podStartE2EDuration="11.619748996s" podCreationTimestamp="2025-11-23 08:54:33 +0000 UTC" firstStartedPulling="2025-11-23 08:54:34.497770832 +0000 UTC m=+7458.195175611" lastFinishedPulling="2025-11-23 08:54:43.917239942 +0000 UTC m=+7467.614644721" observedRunningTime="2025-11-23 08:54:44.616638319 +0000 UTC m=+7468.314043098" watchObservedRunningTime="2025-11-23 08:54:44.619748996 +0000 UTC m=+7468.317153775" Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.691667 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-956md" podStartSLOduration=3.379077561 podStartE2EDuration="12.691646997s" podCreationTimestamp="2025-11-23 08:54:32 +0000 UTC" firstStartedPulling="2025-11-23 08:54:34.481334788 +0000 UTC m=+7458.178739577" lastFinishedPulling="2025-11-23 08:54:43.793904234 +0000 UTC m=+7467.491309013" observedRunningTime="2025-11-23 08:54:44.651034037 +0000 UTC m=+7468.348438816" watchObservedRunningTime="2025-11-23 08:54:44.691646997 +0000 UTC m=+7468.389051766" Nov 23 08:54:44 crc kubenswrapper[5028]: I1123 08:54:44.693133 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-nbsv6" podStartSLOduration=2.403746448 podStartE2EDuration="11.693126283s" podCreationTimestamp="2025-11-23 08:54:33 +0000 UTC" firstStartedPulling="2025-11-23 08:54:34.607664819 +0000 UTC m=+7458.305069598" lastFinishedPulling="2025-11-23 08:54:43.897044664 +0000 UTC m=+7467.594449433" observedRunningTime="2025-11-23 08:54:44.691112294 +0000 UTC m=+7468.388517073" watchObservedRunningTime="2025-11-23 08:54:44.693126283 +0000 UTC m=+7468.390531062" 
Nov 23 08:54:45 crc kubenswrapper[5028]: I1123 08:54:45.608376 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" event={"ID":"8c7c0087-1818-46f5-a3cb-44d8d6664038","Type":"ContainerStarted","Data":"ac0d7449fbf31201679591b94e0ddc4d09a6cf3dd0d96f2b29639548f042c1b7"} Nov 23 08:54:45 crc kubenswrapper[5028]: I1123 08:54:45.613544 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" event={"ID":"6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f","Type":"ContainerStarted","Data":"57cde5c1a7674b1953d0f0397ef56dba1046ed514c45dbad8ff9a185dc0f0a8e"} Nov 23 08:54:45 crc kubenswrapper[5028]: I1123 08:54:45.635551 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv" podStartSLOduration=4.096444051 podStartE2EDuration="13.635526207s" podCreationTimestamp="2025-11-23 08:54:32 +0000 UTC" firstStartedPulling="2025-11-23 08:54:34.240372452 +0000 UTC m=+7457.937777241" lastFinishedPulling="2025-11-23 08:54:43.779454618 +0000 UTC m=+7467.476859397" observedRunningTime="2025-11-23 08:54:45.627829837 +0000 UTC m=+7469.325234636" watchObservedRunningTime="2025-11-23 08:54:45.635526207 +0000 UTC m=+7469.332931006" Nov 23 08:54:45 crc kubenswrapper[5028]: I1123 08:54:45.681074 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-6pplf" Nov 23 08:54:45 crc kubenswrapper[5028]: I1123 08:54:45.692334 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-txn45" podStartSLOduration=3.945138695 podStartE2EDuration="13.692309746s" podCreationTimestamp="2025-11-23 08:54:32 +0000 UTC" firstStartedPulling="2025-11-23 08:54:34.125703708 +0000 UTC m=+7457.823108487" lastFinishedPulling="2025-11-23 08:54:43.872874759 +0000 UTC m=+7467.570279538" observedRunningTime="2025-11-23 08:54:45.686777879 +0000 UTC m=+7469.384182658" watchObservedRunningTime="2025-11-23 08:54:45.692309746 +0000 UTC m=+7469.389714535" Nov 23 08:54:51 crc kubenswrapper[5028]: I1123 08:54:51.049500 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-79hbz"] Nov 23 08:54:51 crc kubenswrapper[5028]: I1123 08:54:51.070089 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-79hbz"] Nov 23 08:54:53 crc kubenswrapper[5028]: I1123 08:54:53.079589 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1" path="/var/lib/kubelet/pods/5a4021ec-4c03-43d9-bb3c-e703d9ec0ef1/volumes" Nov 23 08:54:53 crc kubenswrapper[5028]: I1123 08:54:53.650035 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-nbsv6" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.505132 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.505885 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" containerName="openstackclient" containerID="cri-o://8c12aafbf21cd516e847e4ab88b5f3351deb8a99c90b60acd1c30f036fd53f85" gracePeriod=2 Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.524175 5028 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.605343 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 23 08:54:56 crc kubenswrapper[5028]: E1123 08:54:56.606429 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" containerName="openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.606460 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" containerName="openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.606677 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" containerName="openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.607564 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.631633 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.674801 5028 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" podUID="0de41a86-f333-4d3f-b4aa-e7d62efeb3a3" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.723175 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-openstack-config\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.723277 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.723421 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6km\" (UniqueName: \"kubernetes.io/projected/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-kube-api-access-2c6km\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.825824 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.825993 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c6km\" (UniqueName: \"kubernetes.io/projected/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-kube-api-access-2c6km\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.826067 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-openstack-config\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.827122 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-openstack-config\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.837703 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-openstack-config-secret\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.866304 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.868013 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.881936 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c6km\" (UniqueName: \"kubernetes.io/projected/0de41a86-f333-4d3f-b4aa-e7d62efeb3a3-kube-api-access-2c6km\") pod \"openstackclient\" (UID: \"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3\") " pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.884093 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-w4nms" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.957198 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:54:56 crc kubenswrapper[5028]: I1123 08:54:56.972908 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:54:57 crc kubenswrapper[5028]: I1123 08:54:57.032454 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdjc\" (UniqueName: \"kubernetes.io/projected/3e1ee542-5c64-4f1f-884e-959cdbee781c-kube-api-access-fsdjc\") pod \"kube-state-metrics-0\" (UID: \"3e1ee542-5c64-4f1f-884e-959cdbee781c\") " pod="openstack/kube-state-metrics-0" Nov 23 08:54:57 crc kubenswrapper[5028]: I1123 08:54:57.135219 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsdjc\" (UniqueName: \"kubernetes.io/projected/3e1ee542-5c64-4f1f-884e-959cdbee781c-kube-api-access-fsdjc\") pod \"kube-state-metrics-0\" (UID: \"3e1ee542-5c64-4f1f-884e-959cdbee781c\") " pod="openstack/kube-state-metrics-0" Nov 23 08:54:57 crc kubenswrapper[5028]: I1123 08:54:57.259132 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsdjc\" (UniqueName: \"kubernetes.io/projected/3e1ee542-5c64-4f1f-884e-959cdbee781c-kube-api-access-fsdjc\") pod \"kube-state-metrics-0\" (UID: \"3e1ee542-5c64-4f1f-884e-959cdbee781c\") " pod="openstack/kube-state-metrics-0" Nov 23 08:54:57 crc kubenswrapper[5028]: I1123 08:54:57.270127 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.055161 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:54:58 crc kubenswrapper[5028]: E1123 08:54:58.055925 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.407572 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.413875 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.430752 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.431017 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.431157 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.431280 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.431432 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-w7cwl" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.450778 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.522668 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.579493 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.580657 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c7db62cc-1b79-4241-9006-7c24e5e18e21-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.584511 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82rt\" (UniqueName: \"kubernetes.io/projected/c7db62cc-1b79-4241-9006-7c24e5e18e21-kube-api-access-t82rt\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.584698 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/c7db62cc-1b79-4241-9006-7c24e5e18e21-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.584815 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7db62cc-1b79-4241-9006-7c24e5e18e21-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.585184 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.585313 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.585526 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.687395 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c7db62cc-1b79-4241-9006-7c24e5e18e21-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.687780 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82rt\" (UniqueName: \"kubernetes.io/projected/c7db62cc-1b79-4241-9006-7c24e5e18e21-kube-api-access-t82rt\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.687817 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c7db62cc-1b79-4241-9006-7c24e5e18e21-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.687839 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7db62cc-1b79-4241-9006-7c24e5e18e21-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.687923 5028 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.687974 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.688017 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.691928 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.693358 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/c7db62cc-1b79-4241-9006-7c24e5e18e21-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.700790 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.701171 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c7db62cc-1b79-4241-9006-7c24e5e18e21-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.713530 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c7db62cc-1b79-4241-9006-7c24e5e18e21-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.716667 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c7db62cc-1b79-4241-9006-7c24e5e18e21-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.726716 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82rt\" (UniqueName: 
\"kubernetes.io/projected/c7db62cc-1b79-4241-9006-7c24e5e18e21-kube-api-access-t82rt\") pod \"alertmanager-metric-storage-0\" (UID: \"c7db62cc-1b79-4241-9006-7c24e5e18e21\") " pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.754686 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.767137 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3","Type":"ContainerStarted","Data":"bacbd7855afbded18ebabe74ee401b54d283911305a0aab199b2622542193468"} Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.768720 5028 generic.go:334] "Generic (PLEG): container finished" podID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" containerID="8c12aafbf21cd516e847e4ab88b5f3351deb8a99c90b60acd1c30f036fd53f85" exitCode=137 Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.769619 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e1ee542-5c64-4f1f-884e-959cdbee781c","Type":"ContainerStarted","Data":"3bf66be3b96725a8dd5a4a38789bd45b3b0f396529169da127a97bb3c80f09ed"} Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.851873 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.854404 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.861707 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.861983 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.866764 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.867059 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.867864 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-mqdjf" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.876166 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 23 08:54:58 crc kubenswrapper[5028]: I1123 08:54:58.920480 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.001593 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9s4s\" (UniqueName: \"kubernetes.io/projected/f5dc4c10-d411-4a0e-b0de-5e9191d87531-kube-api-access-z9s4s\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.001932 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/f5dc4c10-d411-4a0e-b0de-5e9191d87531-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.002075 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-config\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.002172 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.002292 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.002446 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f5dc4c10-d411-4a0e-b0de-5e9191d87531-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.002568 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f5dc4c10-d411-4a0e-b0de-5e9191d87531-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.002737 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.105420 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-config\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.105933 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.106006 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.106080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f5dc4c10-d411-4a0e-b0de-5e9191d87531-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.106127 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f5dc4c10-d411-4a0e-b0de-5e9191d87531-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.106184 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.106243 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9s4s\" (UniqueName: \"kubernetes.io/projected/f5dc4c10-d411-4a0e-b0de-5e9191d87531-kube-api-access-z9s4s\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.106269 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f5dc4c10-d411-4a0e-b0de-5e9191d87531-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.107594 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f5dc4c10-d411-4a0e-b0de-5e9191d87531-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.121315 5028 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.121362 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cb5726ed4c9b4e35a6c11ec9770431a99b69777bff1e60acf0a7c9082502f090/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.135884 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.145329 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-config\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.149450 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f5dc4c10-d411-4a0e-b0de-5e9191d87531-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.149672 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f5dc4c10-d411-4a0e-b0de-5e9191d87531-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.153403 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f5dc4c10-d411-4a0e-b0de-5e9191d87531-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.166970 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9s4s\" (UniqueName: \"kubernetes.io/projected/f5dc4c10-d411-4a0e-b0de-5e9191d87531-kube-api-access-z9s4s\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.308911 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.317240 5028 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" podUID="0de41a86-f333-4d3f-b4aa-e7d62efeb3a3" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.338178 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d77d7a31-1f7d-49aa-bddc-0987eed7b70d\") pod \"prometheus-metric-storage-0\" (UID: \"f5dc4c10-d411-4a0e-b0de-5e9191d87531\") " pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.413818 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxp2g\" (UniqueName: \"kubernetes.io/projected/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-kube-api-access-bxp2g\") pod \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.414134 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config-secret\") pod \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.414233 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config\") pod \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\" (UID: \"563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41\") " Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.423190 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-kube-api-access-bxp2g" (OuterVolumeSpecName: "kube-api-access-bxp2g") pod "563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" (UID: "563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41"). InnerVolumeSpecName "kube-api-access-bxp2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.446977 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" (UID: "563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.480219 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" (UID: "563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.516746 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.516794 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.516810 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxp2g\" (UniqueName: \"kubernetes.io/projected/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41-kube-api-access-bxp2g\") on node \"crc\" DevicePath \"\"" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.556665 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.572611 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.801978 5028 scope.go:117] "RemoveContainer" containerID="8c12aafbf21cd516e847e4ab88b5f3351deb8a99c90b60acd1c30f036fd53f85" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.802693 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.808568 5028 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" podUID="0de41a86-f333-4d3f-b4aa-e7d62efeb3a3" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.817202 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c7db62cc-1b79-4241-9006-7c24e5e18e21","Type":"ContainerStarted","Data":"e1881ff65f434176932a3b80f2993c648ee0f70826d36a714c858091396098e5"} Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.871401 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0de41a86-f333-4d3f-b4aa-e7d62efeb3a3","Type":"ContainerStarted","Data":"c41dba4c15bd3be8219f5408dc877c6e34ad093f8a8912be173aa7f417e5e22f"} Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.923645 5028 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" podUID="0de41a86-f333-4d3f-b4aa-e7d62efeb3a3" Nov 23 08:54:59 crc kubenswrapper[5028]: I1123 08:54:59.941840 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.941780757 podStartE2EDuration="3.941780757s" podCreationTimestamp="2025-11-23 08:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:54:59.915598502 +0000 UTC m=+7483.613003301" watchObservedRunningTime="2025-11-23 08:54:59.941780757 +0000 UTC m=+7483.639185536" Nov 23 08:55:00 crc kubenswrapper[5028]: I1123 08:55:00.033650 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 23 08:55:00 crc kubenswrapper[5028]: I1123 08:55:00.913865 
5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f5dc4c10-d411-4a0e-b0de-5e9191d87531","Type":"ContainerStarted","Data":"ee1286755601cff8907a6da1b8049ebe30c32dbb0eaaa3b840fd11eea958972e"} Nov 23 08:55:00 crc kubenswrapper[5028]: I1123 08:55:00.918003 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3e1ee542-5c64-4f1f-884e-959cdbee781c","Type":"ContainerStarted","Data":"6be0133a6e61acb26391b07b37f5eb931ea1580ed96dc5f6b01d0c1e35dc8808"} Nov 23 08:55:00 crc kubenswrapper[5028]: I1123 08:55:00.938105 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.075578893 podStartE2EDuration="4.938079418s" podCreationTimestamp="2025-11-23 08:54:56 +0000 UTC" firstStartedPulling="2025-11-23 08:54:58.677458374 +0000 UTC m=+7482.374863153" lastFinishedPulling="2025-11-23 08:54:59.539958899 +0000 UTC m=+7483.237363678" observedRunningTime="2025-11-23 08:55:00.932356237 +0000 UTC m=+7484.629761016" watchObservedRunningTime="2025-11-23 08:55:00.938079418 +0000 UTC m=+7484.635484197" Nov 23 08:55:01 crc kubenswrapper[5028]: I1123 08:55:01.065045 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41" path="/var/lib/kubelet/pods/563d9dc8-9dfd-4e9f-99e2-78f24bbf8d41/volumes" Nov 23 08:55:01 crc kubenswrapper[5028]: I1123 08:55:01.931866 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 23 08:55:07 crc kubenswrapper[5028]: I1123 08:55:07.001749 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c7db62cc-1b79-4241-9006-7c24e5e18e21","Type":"ContainerStarted","Data":"595e96c40b7cb231eb28c8cb8398a11fb1f23cc5c7669aca27e1d3c6e96167b0"} Nov 23 08:55:07 crc kubenswrapper[5028]: I1123 08:55:07.278979 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 23 08:55:08 crc kubenswrapper[5028]: I1123 08:55:08.022001 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f5dc4c10-d411-4a0e-b0de-5e9191d87531","Type":"ContainerStarted","Data":"d4b1ec56847763dfa1778bdbd7c42ac69622a08a5c4307d182b3f97347170724"} Nov 23 08:55:09 crc kubenswrapper[5028]: I1123 08:55:09.053393 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:55:09 crc kubenswrapper[5028]: E1123 08:55:09.053735 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:55:16 crc kubenswrapper[5028]: I1123 08:55:16.135969 5028 generic.go:334] "Generic (PLEG): container finished" podID="c7db62cc-1b79-4241-9006-7c24e5e18e21" containerID="595e96c40b7cb231eb28c8cb8398a11fb1f23cc5c7669aca27e1d3c6e96167b0" exitCode=0 Nov 23 08:55:16 crc kubenswrapper[5028]: I1123 08:55:16.136078 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"c7db62cc-1b79-4241-9006-7c24e5e18e21","Type":"ContainerDied","Data":"595e96c40b7cb231eb28c8cb8398a11fb1f23cc5c7669aca27e1d3c6e96167b0"} Nov 23 08:55:16 crc kubenswrapper[5028]: I1123 08:55:16.140200 5028 generic.go:334] "Generic (PLEG): container finished" podID="f5dc4c10-d411-4a0e-b0de-5e9191d87531" containerID="d4b1ec56847763dfa1778bdbd7c42ac69622a08a5c4307d182b3f97347170724" exitCode=0 Nov 23 08:55:16 crc kubenswrapper[5028]: I1123 08:55:16.140268 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f5dc4c10-d411-4a0e-b0de-5e9191d87531","Type":"ContainerDied","Data":"d4b1ec56847763dfa1778bdbd7c42ac69622a08a5c4307d182b3f97347170724"} Nov 23 08:55:19 crc kubenswrapper[5028]: I1123 08:55:19.197265 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c7db62cc-1b79-4241-9006-7c24e5e18e21","Type":"ContainerStarted","Data":"c5888a551bfbc36278a30f8255bb775afae8dd64d4fb31f888756c79ef508992"} Nov 23 08:55:20 crc kubenswrapper[5028]: I1123 08:55:20.058205 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bcda-account-create-xw4g8"] Nov 23 08:55:20 crc kubenswrapper[5028]: I1123 08:55:20.069444 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-bcda-account-create-xw4g8"] Nov 23 08:55:21 crc kubenswrapper[5028]: I1123 08:55:21.044532 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xv4d8"] Nov 23 08:55:21 crc kubenswrapper[5028]: I1123 08:55:21.054267 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:55:21 crc kubenswrapper[5028]: E1123 08:55:21.054566 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:55:21 crc kubenswrapper[5028]: I1123 08:55:21.070056 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe52fd0d-7df3-41ba-9339-0789c17b27c2" path="/var/lib/kubelet/pods/fe52fd0d-7df3-41ba-9339-0789c17b27c2/volumes" Nov 23 08:55:21 crc kubenswrapper[5028]: I1123 08:55:21.070842 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xv4d8"] Nov 23 08:55:22 crc kubenswrapper[5028]: I1123 08:55:22.234881 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f5dc4c10-d411-4a0e-b0de-5e9191d87531","Type":"ContainerStarted","Data":"fae41056ff2b41e493e027f3933b3fa40414ff8600148d2c905acedd3104af28"} Nov 23 08:55:23 crc kubenswrapper[5028]: I1123 08:55:23.069014 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3994798-d137-4748-8b00-fc218bfe4481" path="/var/lib/kubelet/pods/b3994798-d137-4748-8b00-fc218bfe4481/volumes" Nov 23 08:55:23 crc kubenswrapper[5028]: I1123 08:55:23.251293 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"c7db62cc-1b79-4241-9006-7c24e5e18e21","Type":"ContainerStarted","Data":"ace833304354b107e0d82d10a61d80df8454e9c28dbaf0834e82dd3434176a2f"} Nov 23 08:55:23 crc kubenswrapper[5028]: I1123 
08:55:23.252094 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:55:23 crc kubenswrapper[5028]: I1123 08:55:23.257353 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 23 08:55:23 crc kubenswrapper[5028]: I1123 08:55:23.299768 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.296644904 podStartE2EDuration="25.299736617s" podCreationTimestamp="2025-11-23 08:54:58 +0000 UTC" firstStartedPulling="2025-11-23 08:54:59.59316829 +0000 UTC m=+7483.290573069" lastFinishedPulling="2025-11-23 08:55:18.596260003 +0000 UTC m=+7502.293664782" observedRunningTime="2025-11-23 08:55:23.291199817 +0000 UTC m=+7506.988604616" watchObservedRunningTime="2025-11-23 08:55:23.299736617 +0000 UTC m=+7506.997141396" Nov 23 08:55:28 crc kubenswrapper[5028]: I1123 08:55:28.996157 5028 scope.go:117] "RemoveContainer" containerID="8bac3942def42e62826f18c91659afb354cf91f434d4677f5d940593d06af59b" Nov 23 08:55:29 crc kubenswrapper[5028]: I1123 08:55:29.030757 5028 scope.go:117] "RemoveContainer" containerID="2bfd9e29ec3259e869d9ce6f516fb3c4f877cbfea0bd9c2a98dd1d1d5f86b8d3" Nov 23 08:55:29 crc kubenswrapper[5028]: I1123 08:55:29.186458 5028 scope.go:117] "RemoveContainer" containerID="37d15c5206eac3e3e22ec4e6a2865467410d5060d9cd187357317e950df38882" Nov 23 08:55:29 crc kubenswrapper[5028]: I1123 08:55:29.314343 5028 scope.go:117] "RemoveContainer" containerID="04027337ddb311f3a01680b248e2c01061df27aaa9aba27f34acf1a012e2d182" Nov 23 08:55:29 crc kubenswrapper[5028]: I1123 08:55:29.330827 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f5dc4c10-d411-4a0e-b0de-5e9191d87531","Type":"ContainerStarted","Data":"181affb4e0b3fdbb8a094a0e8840b4d91cb1b89b61425d22b7740fa94d0b464c"} Nov 23 08:55:29 crc kubenswrapper[5028]: I1123 08:55:29.363130 5028 scope.go:117] "RemoveContainer" containerID="018aefdeccc74c1cea3f0edda3f7c5d4ddfc9e6d0a02b788ff44ae88a5f91a09" Nov 23 08:55:31 crc kubenswrapper[5028]: I1123 08:55:31.045918 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-thhmt"] Nov 23 08:55:31 crc kubenswrapper[5028]: I1123 08:55:31.070635 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-thhmt"] Nov 23 08:55:33 crc kubenswrapper[5028]: I1123 08:55:33.067595 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d276c8a-cc60-421c-95d7-4182305d9e52" path="/var/lib/kubelet/pods/1d276c8a-cc60-421c-95d7-4182305d9e52/volumes" Nov 23 08:55:33 crc kubenswrapper[5028]: I1123 08:55:33.394668 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f5dc4c10-d411-4a0e-b0de-5e9191d87531","Type":"ContainerStarted","Data":"a0b94ff03c0df006c89f4e08e69150814a7df21bc6fc6a27e5bdfe11d46fbc85"} Nov 23 08:55:33 crc kubenswrapper[5028]: I1123 08:55:33.447604 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.048963819 podStartE2EDuration="36.447577197s" podCreationTimestamp="2025-11-23 08:54:57 +0000 UTC" firstStartedPulling="2025-11-23 08:55:00.030303697 +0000 UTC m=+7483.727708476" lastFinishedPulling="2025-11-23 08:55:32.428917075 +0000 UTC m=+7516.126321854" observedRunningTime="2025-11-23 08:55:33.441334923 +0000 UTC 
m=+7517.138739712" watchObservedRunningTime="2025-11-23 08:55:33.447577197 +0000 UTC m=+7517.144981986" Nov 23 08:55:34 crc kubenswrapper[5028]: I1123 08:55:34.578704 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 23 08:55:35 crc kubenswrapper[5028]: I1123 08:55:35.053385 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:55:35 crc kubenswrapper[5028]: E1123 08:55:35.054092 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.173092 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.177667 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.191835 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.192145 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.201163 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.314062 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.314160 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-run-httpd\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.314223 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.314377 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-scripts\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.314582 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-config-data\") pod \"ceilometer-0\" (UID: 
\"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.315155 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-log-httpd\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.315452 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl28t\" (UniqueName: \"kubernetes.io/projected/521d90b6-8e3d-4851-98b4-fec7ab2daf20-kube-api-access-fl28t\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.417891 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-config-data\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418019 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-log-httpd\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418081 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl28t\" (UniqueName: \"kubernetes.io/projected/521d90b6-8e3d-4851-98b4-fec7ab2daf20-kube-api-access-fl28t\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418129 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418152 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-run-httpd\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418196 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418221 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-scripts\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418849 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-log-httpd\") pod 
\"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.418981 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-run-httpd\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.428176 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.430833 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.433025 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-config-data\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.436439 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-scripts\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.438472 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl28t\" (UniqueName: \"kubernetes.io/projected/521d90b6-8e3d-4851-98b4-fec7ab2daf20-kube-api-access-fl28t\") pod \"ceilometer-0\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " pod="openstack/ceilometer-0" Nov 23 08:55:37 crc kubenswrapper[5028]: I1123 08:55:37.513869 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:55:38 crc kubenswrapper[5028]: I1123 08:55:38.092865 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:55:38 crc kubenswrapper[5028]: I1123 08:55:38.460817 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerStarted","Data":"196ae62500fb8baba7609aa33b662400dfe86dff1ca4a393deb6a044bb0293f4"} Nov 23 08:55:42 crc kubenswrapper[5028]: I1123 08:55:42.512000 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerStarted","Data":"a793a0c42c79ed172baf8190aa784120184082fc86fabd04ee5e2d8c3a517ec8"} Nov 23 08:55:43 crc kubenswrapper[5028]: I1123 08:55:43.527244 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerStarted","Data":"954916c27b2aea755c24dad840ca98a0d5be758f18090531af57f5252f2b9448"} Nov 23 08:55:44 crc kubenswrapper[5028]: I1123 08:55:44.539753 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerStarted","Data":"6317ebb60e804b28de2ee438b9a33162e4091c86c13427568ac694c60e1e9a44"} Nov 23 08:55:44 crc kubenswrapper[5028]: I1123 08:55:44.579129 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 23 08:55:44 crc kubenswrapper[5028]: I1123 08:55:44.581292 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 23 08:55:45 crc kubenswrapper[5028]: I1123 08:55:45.559632 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 23 08:55:46 crc kubenswrapper[5028]: I1123 08:55:46.571980 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerStarted","Data":"53422ddc9c67458b25f3f525f1b5fe6afaff557cb223cfc071cc2676e1996719"} Nov 23 08:55:46 crc kubenswrapper[5028]: I1123 08:55:46.595279 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.230871749 podStartE2EDuration="9.595260448s" podCreationTimestamp="2025-11-23 08:55:37 +0000 UTC" firstStartedPulling="2025-11-23 08:55:38.123185155 +0000 UTC m=+7521.820589934" lastFinishedPulling="2025-11-23 08:55:45.487573854 +0000 UTC m=+7529.184978633" observedRunningTime="2025-11-23 08:55:46.593114526 +0000 UTC m=+7530.290519315" watchObservedRunningTime="2025-11-23 08:55:46.595260448 +0000 UTC m=+7530.292665217" Nov 23 08:55:47 crc kubenswrapper[5028]: I1123 08:55:47.581009 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 08:55:48 crc kubenswrapper[5028]: I1123 08:55:48.055099 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:55:48 crc kubenswrapper[5028]: E1123 08:55:48.055326 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.323934 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-gz528"] Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.331496 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.336254 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gz528"] Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.430756 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-50af-account-create-gts6s"] Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.434726 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.440330 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.444974 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-50af-account-create-gts6s"] Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.517990 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjb6b\" (UniqueName: \"kubernetes.io/projected/653c3e93-a5bd-4421-9e67-196a1bec03b4-kube-api-access-kjb6b\") pod \"aodh-db-create-gz528\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.518184 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653c3e93-a5bd-4421-9e67-196a1bec03b4-operator-scripts\") pod \"aodh-db-create-gz528\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.621002 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19b1975-bb33-46c3-84d0-ad7540831dae-operator-scripts\") pod \"aodh-50af-account-create-gts6s\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.621121 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653c3e93-a5bd-4421-9e67-196a1bec03b4-operator-scripts\") pod \"aodh-db-create-gz528\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.621332 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjb6b\" (UniqueName: \"kubernetes.io/projected/653c3e93-a5bd-4421-9e67-196a1bec03b4-kube-api-access-kjb6b\") pod \"aodh-db-create-gz528\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.621385 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdwq\" 
(UniqueName: \"kubernetes.io/projected/a19b1975-bb33-46c3-84d0-ad7540831dae-kube-api-access-pjdwq\") pod \"aodh-50af-account-create-gts6s\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.622245 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653c3e93-a5bd-4421-9e67-196a1bec03b4-operator-scripts\") pod \"aodh-db-create-gz528\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.644450 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjb6b\" (UniqueName: \"kubernetes.io/projected/653c3e93-a5bd-4421-9e67-196a1bec03b4-kube-api-access-kjb6b\") pod \"aodh-db-create-gz528\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.668283 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gz528" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.723940 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdwq\" (UniqueName: \"kubernetes.io/projected/a19b1975-bb33-46c3-84d0-ad7540831dae-kube-api-access-pjdwq\") pod \"aodh-50af-account-create-gts6s\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.724106 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19b1975-bb33-46c3-84d0-ad7540831dae-operator-scripts\") pod \"aodh-50af-account-create-gts6s\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.725136 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19b1975-bb33-46c3-84d0-ad7540831dae-operator-scripts\") pod \"aodh-50af-account-create-gts6s\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.748616 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdwq\" (UniqueName: \"kubernetes.io/projected/a19b1975-bb33-46c3-84d0-ad7540831dae-kube-api-access-pjdwq\") pod \"aodh-50af-account-create-gts6s\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:53 crc kubenswrapper[5028]: I1123 08:55:53.757635 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:54 crc kubenswrapper[5028]: I1123 08:55:54.269842 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-gz528"] Nov 23 08:55:54 crc kubenswrapper[5028]: I1123 08:55:54.391485 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-50af-account-create-gts6s"] Nov 23 08:55:54 crc kubenswrapper[5028]: W1123 08:55:54.410098 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19b1975_bb33_46c3_84d0_ad7540831dae.slice/crio-45d282ac4928e878eedcde000945e612dbdc3945d89bd66db96cb739f6d09344 WatchSource:0}: Error finding container 45d282ac4928e878eedcde000945e612dbdc3945d89bd66db96cb739f6d09344: Status 404 returned error can't find the container with id 45d282ac4928e878eedcde000945e612dbdc3945d89bd66db96cb739f6d09344 Nov 23 08:55:54 crc kubenswrapper[5028]: I1123 08:55:54.687162 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gz528" event={"ID":"653c3e93-a5bd-4421-9e67-196a1bec03b4","Type":"ContainerStarted","Data":"804e8a54a42640c946f206b04c16754adf4f349e8b224b1192a5c9d41ab9556a"} Nov 23 08:55:54 crc kubenswrapper[5028]: I1123 08:55:54.687223 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gz528" event={"ID":"653c3e93-a5bd-4421-9e67-196a1bec03b4","Type":"ContainerStarted","Data":"09df786dc3080cea0f1ff829ff57b7ba50d50a9be37fb8a232b1f7c9b12b45a5"} Nov 23 08:55:54 crc kubenswrapper[5028]: I1123 08:55:54.689235 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-50af-account-create-gts6s" event={"ID":"a19b1975-bb33-46c3-84d0-ad7540831dae","Type":"ContainerStarted","Data":"45d282ac4928e878eedcde000945e612dbdc3945d89bd66db96cb739f6d09344"} Nov 23 08:55:54 crc kubenswrapper[5028]: I1123 08:55:54.712510 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-gz528" podStartSLOduration=1.712486132 podStartE2EDuration="1.712486132s" podCreationTimestamp="2025-11-23 08:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:55:54.705444458 +0000 UTC m=+7538.402849237" watchObservedRunningTime="2025-11-23 08:55:54.712486132 +0000 UTC m=+7538.409890911" Nov 23 08:55:55 crc kubenswrapper[5028]: I1123 08:55:55.733845 5028 generic.go:334] "Generic (PLEG): container finished" podID="653c3e93-a5bd-4421-9e67-196a1bec03b4" containerID="804e8a54a42640c946f206b04c16754adf4f349e8b224b1192a5c9d41ab9556a" exitCode=0 Nov 23 08:55:55 crc kubenswrapper[5028]: I1123 08:55:55.734639 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gz528" event={"ID":"653c3e93-a5bd-4421-9e67-196a1bec03b4","Type":"ContainerDied","Data":"804e8a54a42640c946f206b04c16754adf4f349e8b224b1192a5c9d41ab9556a"} Nov 23 08:55:55 crc kubenswrapper[5028]: I1123 08:55:55.743997 5028 generic.go:334] "Generic (PLEG): container finished" podID="a19b1975-bb33-46c3-84d0-ad7540831dae" containerID="9b6c7d83e69ea31626205c72e1cfa2372063da0de7c38758ac01a1e163a882a9" exitCode=0 Nov 23 08:55:55 crc kubenswrapper[5028]: I1123 08:55:55.744131 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-50af-account-create-gts6s" 
event={"ID":"a19b1975-bb33-46c3-84d0-ad7540831dae","Type":"ContainerDied","Data":"9b6c7d83e69ea31626205c72e1cfa2372063da0de7c38758ac01a1e163a882a9"} Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.265577 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gz528" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.326929 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjb6b\" (UniqueName: \"kubernetes.io/projected/653c3e93-a5bd-4421-9e67-196a1bec03b4-kube-api-access-kjb6b\") pod \"653c3e93-a5bd-4421-9e67-196a1bec03b4\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.327471 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653c3e93-a5bd-4421-9e67-196a1bec03b4-operator-scripts\") pod \"653c3e93-a5bd-4421-9e67-196a1bec03b4\" (UID: \"653c3e93-a5bd-4421-9e67-196a1bec03b4\") " Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.328177 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/653c3e93-a5bd-4421-9e67-196a1bec03b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "653c3e93-a5bd-4421-9e67-196a1bec03b4" (UID: "653c3e93-a5bd-4421-9e67-196a1bec03b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.334818 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653c3e93-a5bd-4421-9e67-196a1bec03b4-kube-api-access-kjb6b" (OuterVolumeSpecName: "kube-api-access-kjb6b") pod "653c3e93-a5bd-4421-9e67-196a1bec03b4" (UID: "653c3e93-a5bd-4421-9e67-196a1bec03b4"). InnerVolumeSpecName "kube-api-access-kjb6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.391851 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.429168 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjdwq\" (UniqueName: \"kubernetes.io/projected/a19b1975-bb33-46c3-84d0-ad7540831dae-kube-api-access-pjdwq\") pod \"a19b1975-bb33-46c3-84d0-ad7540831dae\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.429434 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19b1975-bb33-46c3-84d0-ad7540831dae-operator-scripts\") pod \"a19b1975-bb33-46c3-84d0-ad7540831dae\" (UID: \"a19b1975-bb33-46c3-84d0-ad7540831dae\") " Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.430022 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19b1975-bb33-46c3-84d0-ad7540831dae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a19b1975-bb33-46c3-84d0-ad7540831dae" (UID: "a19b1975-bb33-46c3-84d0-ad7540831dae"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.432166 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/653c3e93-a5bd-4421-9e67-196a1bec03b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.432206 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a19b1975-bb33-46c3-84d0-ad7540831dae-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.432222 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjb6b\" (UniqueName: \"kubernetes.io/projected/653c3e93-a5bd-4421-9e67-196a1bec03b4-kube-api-access-kjb6b\") on node \"crc\" DevicePath \"\"" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.433513 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19b1975-bb33-46c3-84d0-ad7540831dae-kube-api-access-pjdwq" (OuterVolumeSpecName: "kube-api-access-pjdwq") pod "a19b1975-bb33-46c3-84d0-ad7540831dae" (UID: "a19b1975-bb33-46c3-84d0-ad7540831dae"). InnerVolumeSpecName "kube-api-access-pjdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.537423 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjdwq\" (UniqueName: \"kubernetes.io/projected/a19b1975-bb33-46c3-84d0-ad7540831dae-kube-api-access-pjdwq\") on node \"crc\" DevicePath \"\"" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.787361 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-gz528" event={"ID":"653c3e93-a5bd-4421-9e67-196a1bec03b4","Type":"ContainerDied","Data":"09df786dc3080cea0f1ff829ff57b7ba50d50a9be37fb8a232b1f7c9b12b45a5"} Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.787819 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09df786dc3080cea0f1ff829ff57b7ba50d50a9be37fb8a232b1f7c9b12b45a5" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.787399 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-gz528" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.790093 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-50af-account-create-gts6s" event={"ID":"a19b1975-bb33-46c3-84d0-ad7540831dae","Type":"ContainerDied","Data":"45d282ac4928e878eedcde000945e612dbdc3945d89bd66db96cb739f6d09344"} Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.790147 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d282ac4928e878eedcde000945e612dbdc3945d89bd66db96cb739f6d09344" Nov 23 08:55:57 crc kubenswrapper[5028]: I1123 08:55:57.790242 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-50af-account-create-gts6s" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.873820 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-jz2dc"] Nov 23 08:55:58 crc kubenswrapper[5028]: E1123 08:55:58.875461 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653c3e93-a5bd-4421-9e67-196a1bec03b4" containerName="mariadb-database-create" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.875487 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="653c3e93-a5bd-4421-9e67-196a1bec03b4" containerName="mariadb-database-create" Nov 23 08:55:58 crc kubenswrapper[5028]: E1123 08:55:58.875517 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19b1975-bb33-46c3-84d0-ad7540831dae" containerName="mariadb-account-create" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.875527 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19b1975-bb33-46c3-84d0-ad7540831dae" containerName="mariadb-account-create" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.875808 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="653c3e93-a5bd-4421-9e67-196a1bec03b4" containerName="mariadb-database-create" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.875828 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19b1975-bb33-46c3-84d0-ad7540831dae" containerName="mariadb-account-create" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.877277 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.880077 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-28769" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.881242 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.881738 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.887486 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.894450 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jz2dc"] Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.974552 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-config-data\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.975081 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-scripts\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.975124 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4p8\" (UniqueName: \"kubernetes.io/projected/d661c439-4a78-4492-b01d-d4f3bc755e8b-kube-api-access-8n4p8\") pod \"aodh-db-sync-jz2dc\" (UID: 
\"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:58 crc kubenswrapper[5028]: I1123 08:55:58.975454 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-combined-ca-bundle\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.054628 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:55:59 crc kubenswrapper[5028]: E1123 08:55:59.055067 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.079027 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-combined-ca-bundle\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.079427 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-config-data\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.079546 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-scripts\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.079633 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4p8\" (UniqueName: \"kubernetes.io/projected/d661c439-4a78-4492-b01d-d4f3bc755e8b-kube-api-access-8n4p8\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.089180 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-config-data\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.102745 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-scripts\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.103026 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-combined-ca-bundle\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.107616 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4p8\" (UniqueName: \"kubernetes.io/projected/d661c439-4a78-4492-b01d-d4f3bc755e8b-kube-api-access-8n4p8\") pod \"aodh-db-sync-jz2dc\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.202770 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.710790 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-jz2dc"] Nov 23 08:55:59 crc kubenswrapper[5028]: I1123 08:55:59.822482 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jz2dc" event={"ID":"d661c439-4a78-4492-b01d-d4f3bc755e8b","Type":"ContainerStarted","Data":"84c855b37b31947692bfb171b468487a40a49f8bdd123bb6c482b6f30a0257c8"} Nov 23 08:56:04 crc kubenswrapper[5028]: I1123 08:56:04.891732 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jz2dc" event={"ID":"d661c439-4a78-4492-b01d-d4f3bc755e8b","Type":"ContainerStarted","Data":"c36ff0185ef9c46e25ea1bd28530d655497cae111dbf09419cc01846e4abdf4c"} Nov 23 08:56:04 crc kubenswrapper[5028]: I1123 08:56:04.920604 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-jz2dc" podStartSLOduration=2.086862653 podStartE2EDuration="6.920585157s" podCreationTimestamp="2025-11-23 08:55:58 +0000 UTC" firstStartedPulling="2025-11-23 08:55:59.718700994 +0000 UTC m=+7543.416105773" lastFinishedPulling="2025-11-23 08:56:04.552423488 +0000 UTC m=+7548.249828277" observedRunningTime="2025-11-23 08:56:04.918554927 +0000 UTC m=+7548.615959706" watchObservedRunningTime="2025-11-23 08:56:04.920585157 +0000 UTC m=+7548.617989936" Nov 23 08:56:07 crc kubenswrapper[5028]: I1123 08:56:07.529417 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 08:56:07 crc kubenswrapper[5028]: I1123 08:56:07.944672 5028 generic.go:334] "Generic (PLEG): container finished" podID="d661c439-4a78-4492-b01d-d4f3bc755e8b" containerID="c36ff0185ef9c46e25ea1bd28530d655497cae111dbf09419cc01846e4abdf4c" exitCode=0 Nov 23 08:56:07 crc kubenswrapper[5028]: I1123 08:56:07.944734 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jz2dc" event={"ID":"d661c439-4a78-4492-b01d-d4f3bc755e8b","Type":"ContainerDied","Data":"c36ff0185ef9c46e25ea1bd28530d655497cae111dbf09419cc01846e4abdf4c"} Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.336271 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.535919 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-scripts\") pod \"d661c439-4a78-4492-b01d-d4f3bc755e8b\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.536339 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n4p8\" (UniqueName: \"kubernetes.io/projected/d661c439-4a78-4492-b01d-d4f3bc755e8b-kube-api-access-8n4p8\") pod \"d661c439-4a78-4492-b01d-d4f3bc755e8b\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.536363 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-config-data\") pod \"d661c439-4a78-4492-b01d-d4f3bc755e8b\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.536391 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-combined-ca-bundle\") pod \"d661c439-4a78-4492-b01d-d4f3bc755e8b\" (UID: \"d661c439-4a78-4492-b01d-d4f3bc755e8b\") " Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.542337 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d661c439-4a78-4492-b01d-d4f3bc755e8b-kube-api-access-8n4p8" (OuterVolumeSpecName: "kube-api-access-8n4p8") pod "d661c439-4a78-4492-b01d-d4f3bc755e8b" (UID: "d661c439-4a78-4492-b01d-d4f3bc755e8b"). InnerVolumeSpecName "kube-api-access-8n4p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.543515 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-scripts" (OuterVolumeSpecName: "scripts") pod "d661c439-4a78-4492-b01d-d4f3bc755e8b" (UID: "d661c439-4a78-4492-b01d-d4f3bc755e8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.566621 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-config-data" (OuterVolumeSpecName: "config-data") pod "d661c439-4a78-4492-b01d-d4f3bc755e8b" (UID: "d661c439-4a78-4492-b01d-d4f3bc755e8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.568547 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d661c439-4a78-4492-b01d-d4f3bc755e8b" (UID: "d661c439-4a78-4492-b01d-d4f3bc755e8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.639289 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n4p8\" (UniqueName: \"kubernetes.io/projected/d661c439-4a78-4492-b01d-d4f3bc755e8b-kube-api-access-8n4p8\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.639627 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.639783 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.639864 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d661c439-4a78-4492-b01d-d4f3bc755e8b-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.980704 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-jz2dc" event={"ID":"d661c439-4a78-4492-b01d-d4f3bc755e8b","Type":"ContainerDied","Data":"84c855b37b31947692bfb171b468487a40a49f8bdd123bb6c482b6f30a0257c8"} Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.980776 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84c855b37b31947692bfb171b468487a40a49f8bdd123bb6c482b6f30a0257c8" Nov 23 08:56:09 crc kubenswrapper[5028]: I1123 08:56:09.980891 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-jz2dc" Nov 23 08:56:10 crc kubenswrapper[5028]: I1123 08:56:10.053118 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:56:10 crc kubenswrapper[5028]: E1123 08:56:10.053390 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.002447 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 23 08:56:14 crc kubenswrapper[5028]: E1123 08:56:14.004147 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d661c439-4a78-4492-b01d-d4f3bc755e8b" containerName="aodh-db-sync" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.004176 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d661c439-4a78-4492-b01d-d4f3bc755e8b" containerName="aodh-db-sync" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.004543 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d661c439-4a78-4492-b01d-d4f3bc755e8b" containerName="aodh-db-sync" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.009614 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.015466 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.015904 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.018226 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-28769" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.027418 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.054109 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-scripts\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.054571 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.054839 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rvrk\" (UniqueName: \"kubernetes.io/projected/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-kube-api-access-9rvrk\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.054893 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-config-data\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.157306 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.157461 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rvrk\" (UniqueName: \"kubernetes.io/projected/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-kube-api-access-9rvrk\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.157498 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-config-data\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.157552 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-scripts\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: 
I1123 08:56:14.166048 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-config-data\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.166202 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.172381 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-scripts\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.190532 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rvrk\" (UniqueName: \"kubernetes.io/projected/ed909c0c-2d7e-46ab-9c04-5fa86f5884e6-kube-api-access-9rvrk\") pod \"aodh-0\" (UID: \"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6\") " pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.348936 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 23 08:56:14 crc kubenswrapper[5028]: I1123 08:56:14.874797 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 23 08:56:15 crc kubenswrapper[5028]: I1123 08:56:15.048548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6","Type":"ContainerStarted","Data":"6ffde4bd1de33fc652ccfc74579ad3ee057a02a421b6586d7f81a47146a7675c"} Nov 23 08:56:15 crc kubenswrapper[5028]: I1123 08:56:15.258430 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:56:15 crc kubenswrapper[5028]: I1123 08:56:15.259830 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="sg-core" containerID="cri-o://6317ebb60e804b28de2ee438b9a33162e4091c86c13427568ac694c60e1e9a44" gracePeriod=30 Nov 23 08:56:15 crc kubenswrapper[5028]: I1123 08:56:15.261123 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-notification-agent" containerID="cri-o://954916c27b2aea755c24dad840ca98a0d5be758f18090531af57f5252f2b9448" gracePeriod=30 Nov 23 08:56:15 crc kubenswrapper[5028]: I1123 08:56:15.261211 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="proxy-httpd" containerID="cri-o://53422ddc9c67458b25f3f525f1b5fe6afaff557cb223cfc071cc2676e1996719" gracePeriod=30 Nov 23 08:56:15 crc kubenswrapper[5028]: I1123 08:56:15.261475 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-central-agent" containerID="cri-o://a793a0c42c79ed172baf8190aa784120184082fc86fabd04ee5e2d8c3a517ec8" gracePeriod=30 Nov 23 08:56:16 crc kubenswrapper[5028]: I1123 08:56:16.073792 5028 generic.go:334] "Generic (PLEG): container 
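The four "Killing container with a grace period" entries are the API-driven deletion of ceilometer-0 fanning out to its containers: each gets SIGTERM and, if still running once gracePeriod=30 seconds (the pod's terminationGracePeriodSeconds) elapses, SIGKILL. The exit codes that follow show proxy-httpd and the agents exiting 0 on SIGTERM while sg-core exits 2. A minimal standalone sketch of the term-then-kill contract (not kubelet or CRI-O code; assumes a Unix system with a sleep binary):

```go
package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, then SIGKILL if the process is still
// running once the grace period elapses.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace exhausted: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopWithGrace(cmd, 30*time.Second)
}
```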
finished" podID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerID="53422ddc9c67458b25f3f525f1b5fe6afaff557cb223cfc071cc2676e1996719" exitCode=0 Nov 23 08:56:16 crc kubenswrapper[5028]: I1123 08:56:16.074142 5028 generic.go:334] "Generic (PLEG): container finished" podID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerID="6317ebb60e804b28de2ee438b9a33162e4091c86c13427568ac694c60e1e9a44" exitCode=2 Nov 23 08:56:16 crc kubenswrapper[5028]: I1123 08:56:16.074151 5028 generic.go:334] "Generic (PLEG): container finished" podID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerID="a793a0c42c79ed172baf8190aa784120184082fc86fabd04ee5e2d8c3a517ec8" exitCode=0 Nov 23 08:56:16 crc kubenswrapper[5028]: I1123 08:56:16.074038 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerDied","Data":"53422ddc9c67458b25f3f525f1b5fe6afaff557cb223cfc071cc2676e1996719"} Nov 23 08:56:16 crc kubenswrapper[5028]: I1123 08:56:16.074220 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerDied","Data":"6317ebb60e804b28de2ee438b9a33162e4091c86c13427568ac694c60e1e9a44"} Nov 23 08:56:16 crc kubenswrapper[5028]: I1123 08:56:16.074241 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerDied","Data":"a793a0c42c79ed172baf8190aa784120184082fc86fabd04ee5e2d8c3a517ec8"} Nov 23 08:56:16 crc kubenswrapper[5028]: I1123 08:56:16.078706 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6","Type":"ContainerStarted","Data":"b43691bf3babd16575de2ca9f70eaf60741bd44eb9eda5e50bfc2c45fea5d6a2"} Nov 23 08:56:17 crc kubenswrapper[5028]: I1123 08:56:17.102546 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6","Type":"ContainerStarted","Data":"2f7eb5d184585054463aeff8970b25a34026199dc51f78e2616c8e8e622aef76"} Nov 23 08:56:18 crc kubenswrapper[5028]: I1123 08:56:18.118748 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6","Type":"ContainerStarted","Data":"347357f8b7915552598c2c79b92b0033a06aba4e1cdf9d9ec668b9923ba81a41"} Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.151686 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"ed909c0c-2d7e-46ab-9c04-5fa86f5884e6","Type":"ContainerStarted","Data":"18a0d7db7974c643874b2973870942eca13ed8ae4fa627ffb52ddeb0c1442648"} Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.156484 5028 generic.go:334] "Generic (PLEG): container finished" podID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerID="954916c27b2aea755c24dad840ca98a0d5be758f18090531af57f5252f2b9448" exitCode=0 Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.156567 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerDied","Data":"954916c27b2aea755c24dad840ca98a0d5be758f18090531af57f5252f2b9448"} Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.176232 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.954990994 podStartE2EDuration="7.176205361s" podCreationTimestamp="2025-11-23 08:56:13 +0000 UTC" 
firstStartedPulling="2025-11-23 08:56:14.910102186 +0000 UTC m=+7558.607506965" lastFinishedPulling="2025-11-23 08:56:19.131316543 +0000 UTC m=+7562.828721332" observedRunningTime="2025-11-23 08:56:20.172234653 +0000 UTC m=+7563.869639442" watchObservedRunningTime="2025-11-23 08:56:20.176205361 +0000 UTC m=+7563.873610140" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.583055 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.667898 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-config-data\") pod \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.667988 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl28t\" (UniqueName: \"kubernetes.io/projected/521d90b6-8e3d-4851-98b4-fec7ab2daf20-kube-api-access-fl28t\") pod \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.668086 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-combined-ca-bundle\") pod \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.668110 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-run-httpd\") pod \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.668156 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-scripts\") pod \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.668187 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-sg-core-conf-yaml\") pod \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.668312 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-log-httpd\") pod \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\" (UID: \"521d90b6-8e3d-4851-98b4-fec7ab2daf20\") " Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.671766 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "521d90b6-8e3d-4851-98b4-fec7ab2daf20" (UID: "521d90b6-8e3d-4851-98b4-fec7ab2daf20"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.671806 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "521d90b6-8e3d-4851-98b4-fec7ab2daf20" (UID: "521d90b6-8e3d-4851-98b4-fec7ab2daf20"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.678484 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-scripts" (OuterVolumeSpecName: "scripts") pod "521d90b6-8e3d-4851-98b4-fec7ab2daf20" (UID: "521d90b6-8e3d-4851-98b4-fec7ab2daf20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.678630 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521d90b6-8e3d-4851-98b4-fec7ab2daf20-kube-api-access-fl28t" (OuterVolumeSpecName: "kube-api-access-fl28t") pod "521d90b6-8e3d-4851-98b4-fec7ab2daf20" (UID: "521d90b6-8e3d-4851-98b4-fec7ab2daf20"). InnerVolumeSpecName "kube-api-access-fl28t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.711657 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "521d90b6-8e3d-4851-98b4-fec7ab2daf20" (UID: "521d90b6-8e3d-4851-98b4-fec7ab2daf20"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.770565 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.770599 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl28t\" (UniqueName: \"kubernetes.io/projected/521d90b6-8e3d-4851-98b4-fec7ab2daf20-kube-api-access-fl28t\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.770610 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/521d90b6-8e3d-4851-98b4-fec7ab2daf20-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.770620 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.770630 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.798640 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "521d90b6-8e3d-4851-98b4-fec7ab2daf20" (UID: "521d90b6-8e3d-4851-98b4-fec7ab2daf20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.813056 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-config-data" (OuterVolumeSpecName: "config-data") pod "521d90b6-8e3d-4851-98b4-fec7ab2daf20" (UID: "521d90b6-8e3d-4851-98b4-fec7ab2daf20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.872426 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:20 crc kubenswrapper[5028]: I1123 08:56:20.872465 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521d90b6-8e3d-4851-98b4-fec7ab2daf20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.173557 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"521d90b6-8e3d-4851-98b4-fec7ab2daf20","Type":"ContainerDied","Data":"196ae62500fb8baba7609aa33b662400dfe86dff1ca4a393deb6a044bb0293f4"} Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.173632 5028 scope.go:117] "RemoveContainer" containerID="53422ddc9c67458b25f3f525f1b5fe6afaff557cb223cfc071cc2676e1996719" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.173839 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.225833 5028 scope.go:117] "RemoveContainer" containerID="6317ebb60e804b28de2ee438b9a33162e4091c86c13427568ac694c60e1e9a44" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.226075 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.243844 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.253344 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:56:21 crc kubenswrapper[5028]: E1123 08:56:21.253821 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="proxy-httpd" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.253835 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="proxy-httpd" Nov 23 08:56:21 crc kubenswrapper[5028]: E1123 08:56:21.253869 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-notification-agent" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.253875 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-notification-agent" Nov 23 08:56:21 crc kubenswrapper[5028]: E1123 08:56:21.253891 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="sg-core" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.254131 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="sg-core" Nov 23 08:56:21 crc kubenswrapper[5028]: E1123 08:56:21.254149 5028 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-central-agent" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.254155 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-central-agent" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.254378 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="sg-core" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.254402 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-notification-agent" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.254423 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="ceilometer-central-agent" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.254439 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" containerName="proxy-httpd" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.256316 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.259928 5028 scope.go:117] "RemoveContainer" containerID="954916c27b2aea755c24dad840ca98a0d5be758f18090531af57f5252f2b9448" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.260075 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.260288 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.265055 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.310917 5028 scope.go:117] "RemoveContainer" containerID="a793a0c42c79ed172baf8190aa784120184082fc86fabd04ee5e2d8c3a517ec8" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.387093 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.387152 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/2ecc121e-d13d-469a-9e62-083e2ec8ad96-kube-api-access-jcf74\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.387359 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-run-httpd\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.387550 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-log-httpd\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.387872 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-scripts\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.387985 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.388348 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-config-data\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.491352 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-config-data\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.491414 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.491444 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/2ecc121e-d13d-469a-9e62-083e2ec8ad96-kube-api-access-jcf74\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.491506 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-run-httpd\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.491564 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-log-httpd\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.491660 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-scripts\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.491701 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.492527 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-run-httpd\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.497410 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-log-httpd\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.497702 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.499259 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.500060 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-config-data\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.506508 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-scripts\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.523801 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/2ecc121e-d13d-469a-9e62-083e2ec8ad96-kube-api-access-jcf74\") pod \"ceilometer-0\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " pod="openstack/ceilometer-0" Nov 23 08:56:21 crc kubenswrapper[5028]: I1123 08:56:21.588623 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:56:22 crc kubenswrapper[5028]: I1123 08:56:22.054124 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:56:22 crc kubenswrapper[5028]: E1123 08:56:22.056081 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:56:22 crc kubenswrapper[5028]: I1123 08:56:22.188391 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:56:22 crc kubenswrapper[5028]: W1123 08:56:22.197876 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ecc121e_d13d_469a_9e62_083e2ec8ad96.slice/crio-657a1277d6a79d6d25059504125298d6a91733227ad7f9e2a7dd5f0a0808913a WatchSource:0}: Error finding container 657a1277d6a79d6d25059504125298d6a91733227ad7f9e2a7dd5f0a0808913a: Status 404 returned error can't find the container with id 657a1277d6a79d6d25059504125298d6a91733227ad7f9e2a7dd5f0a0808913a Nov 23 08:56:23 crc kubenswrapper[5028]: I1123 08:56:23.081347 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521d90b6-8e3d-4851-98b4-fec7ab2daf20" path="/var/lib/kubelet/pods/521d90b6-8e3d-4851-98b4-fec7ab2daf20/volumes" Nov 23 08:56:23 crc kubenswrapper[5028]: I1123 08:56:23.203538 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerStarted","Data":"28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb"} Nov 23 08:56:23 crc kubenswrapper[5028]: I1123 08:56:23.203604 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerStarted","Data":"a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d"} Nov 23 08:56:23 crc kubenswrapper[5028]: I1123 08:56:23.203618 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerStarted","Data":"657a1277d6a79d6d25059504125298d6a91733227ad7f9e2a7dd5f0a0808913a"} Nov 23 08:56:24 crc kubenswrapper[5028]: I1123 08:56:24.215601 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerStarted","Data":"bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59"} Nov 23 08:56:25 crc kubenswrapper[5028]: I1123 08:56:25.231241 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerStarted","Data":"7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75"} Nov 23 08:56:25 crc kubenswrapper[5028]: I1123 08:56:25.232052 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 08:56:25 crc kubenswrapper[5028]: I1123 08:56:25.269364 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.538879498 podStartE2EDuration="4.269218661s" 
podCreationTimestamp="2025-11-23 08:56:21 +0000 UTC" firstStartedPulling="2025-11-23 08:56:22.200363029 +0000 UTC m=+7565.897767808" lastFinishedPulling="2025-11-23 08:56:24.930702182 +0000 UTC m=+7568.628106971" observedRunningTime="2025-11-23 08:56:25.255641476 +0000 UTC m=+7568.953046255" watchObservedRunningTime="2025-11-23 08:56:25.269218661 +0000 UTC m=+7568.966623440" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.374681 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-hnsp5"] Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.377495 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.386602 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hnsp5"] Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.429401 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mr9f\" (UniqueName: \"kubernetes.io/projected/d38fb54a-51b5-4733-8437-2cc1398a5938-kube-api-access-5mr9f\") pod \"manila-db-create-hnsp5\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.429532 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38fb54a-51b5-4733-8437-2cc1398a5938-operator-scripts\") pod \"manila-db-create-hnsp5\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.478516 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-9fc2-account-create-vz4kk"] Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.480604 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.484379 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.495170 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-9fc2-account-create-vz4kk"] Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.531546 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38fb54a-51b5-4733-8437-2cc1398a5938-operator-scripts\") pod \"manila-db-create-hnsp5\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.531700 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czv9q\" (UniqueName: \"kubernetes.io/projected/cf27fcd2-e3a8-42ca-aab7-d990b6178677-kube-api-access-czv9q\") pod \"manila-9fc2-account-create-vz4kk\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.531860 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf27fcd2-e3a8-42ca-aab7-d990b6178677-operator-scripts\") pod \"manila-9fc2-account-create-vz4kk\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.531907 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mr9f\" (UniqueName: \"kubernetes.io/projected/d38fb54a-51b5-4733-8437-2cc1398a5938-kube-api-access-5mr9f\") pod \"manila-db-create-hnsp5\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.533330 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38fb54a-51b5-4733-8437-2cc1398a5938-operator-scripts\") pod \"manila-db-create-hnsp5\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.556630 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mr9f\" (UniqueName: \"kubernetes.io/projected/d38fb54a-51b5-4733-8437-2cc1398a5938-kube-api-access-5mr9f\") pod \"manila-db-create-hnsp5\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.633739 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf27fcd2-e3a8-42ca-aab7-d990b6178677-operator-scripts\") pod \"manila-9fc2-account-create-vz4kk\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.633891 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czv9q\" (UniqueName: \"kubernetes.io/projected/cf27fcd2-e3a8-42ca-aab7-d990b6178677-kube-api-access-czv9q\") pod \"manila-9fc2-account-create-vz4kk\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " 
pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.634594 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf27fcd2-e3a8-42ca-aab7-d990b6178677-operator-scripts\") pod \"manila-9fc2-account-create-vz4kk\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.653741 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czv9q\" (UniqueName: \"kubernetes.io/projected/cf27fcd2-e3a8-42ca-aab7-d990b6178677-kube-api-access-czv9q\") pod \"manila-9fc2-account-create-vz4kk\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.711633 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:26 crc kubenswrapper[5028]: I1123 08:56:26.821788 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:27 crc kubenswrapper[5028]: I1123 08:56:27.305681 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hnsp5"] Nov 23 08:56:27 crc kubenswrapper[5028]: W1123 08:56:27.320467 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd38fb54a_51b5_4733_8437_2cc1398a5938.slice/crio-e5c03865b67b286fec0d9074738a6af102e27f240c7231ebf3b94f6571880e99 WatchSource:0}: Error finding container e5c03865b67b286fec0d9074738a6af102e27f240c7231ebf3b94f6571880e99: Status 404 returned error can't find the container with id e5c03865b67b286fec0d9074738a6af102e27f240c7231ebf3b94f6571880e99 Nov 23 08:56:27 crc kubenswrapper[5028]: I1123 08:56:27.447062 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-9fc2-account-create-vz4kk"] Nov 23 08:56:28 crc kubenswrapper[5028]: I1123 08:56:28.302420 5028 generic.go:334] "Generic (PLEG): container finished" podID="d38fb54a-51b5-4733-8437-2cc1398a5938" containerID="f2d2b0e17716864f5361cad16a81ff5f84c1e4914427cd4d0bb85040272cb66a" exitCode=0 Nov 23 08:56:28 crc kubenswrapper[5028]: I1123 08:56:28.302876 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hnsp5" event={"ID":"d38fb54a-51b5-4733-8437-2cc1398a5938","Type":"ContainerDied","Data":"f2d2b0e17716864f5361cad16a81ff5f84c1e4914427cd4d0bb85040272cb66a"} Nov 23 08:56:28 crc kubenswrapper[5028]: I1123 08:56:28.302910 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hnsp5" event={"ID":"d38fb54a-51b5-4733-8437-2cc1398a5938","Type":"ContainerStarted","Data":"e5c03865b67b286fec0d9074738a6af102e27f240c7231ebf3b94f6571880e99"} Nov 23 08:56:28 crc kubenswrapper[5028]: I1123 08:56:28.313665 5028 generic.go:334] "Generic (PLEG): container finished" podID="cf27fcd2-e3a8-42ca-aab7-d990b6178677" containerID="704089c37b0dfcf36a6eb88fc3f25c7dd4722b20e681f3c400b009c91a9015c0" exitCode=0 Nov 23 08:56:28 crc kubenswrapper[5028]: I1123 08:56:28.313716 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-9fc2-account-create-vz4kk" event={"ID":"cf27fcd2-e3a8-42ca-aab7-d990b6178677","Type":"ContainerDied","Data":"704089c37b0dfcf36a6eb88fc3f25c7dd4722b20e681f3c400b009c91a9015c0"} Nov 23 08:56:28 crc 
kubenswrapper[5028]: I1123 08:56:28.313742 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-9fc2-account-create-vz4kk" event={"ID":"cf27fcd2-e3a8-42ca-aab7-d990b6178677","Type":"ContainerStarted","Data":"6da191c8d4f37c009e5423f33e24a26952fed31c19fb3bac5ef718b759775257"} Nov 23 08:56:29 crc kubenswrapper[5028]: I1123 08:56:29.079577 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2lkz7"] Nov 23 08:56:29 crc kubenswrapper[5028]: I1123 08:56:29.094807 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2lkz7"] Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:29.526558 5028 scope.go:117] "RemoveContainer" containerID="23f45590c1c197ab642a89db4f0190cb22f329e2b3e8fded1b4be393f6d89e62" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:29.884505 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.046194 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4b15-account-create-67t6s"] Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.055146 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ppnb4"] Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.065850 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4b15-account-create-67t6s"] Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.071270 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf27fcd2-e3a8-42ca-aab7-d990b6178677-operator-scripts\") pod \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.071542 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czv9q\" (UniqueName: \"kubernetes.io/projected/cf27fcd2-e3a8-42ca-aab7-d990b6178677-kube-api-access-czv9q\") pod \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\" (UID: \"cf27fcd2-e3a8-42ca-aab7-d990b6178677\") " Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.072160 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf27fcd2-e3a8-42ca-aab7-d990b6178677-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf27fcd2-e3a8-42ca-aab7-d990b6178677" (UID: "cf27fcd2-e3a8-42ca-aab7-d990b6178677"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.072707 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf27fcd2-e3a8-42ca-aab7-d990b6178677-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.075356 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ppnb4"] Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.083266 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf27fcd2-e3a8-42ca-aab7-d990b6178677-kube-api-access-czv9q" (OuterVolumeSpecName: "kube-api-access-czv9q") pod "cf27fcd2-e3a8-42ca-aab7-d990b6178677" (UID: "cf27fcd2-e3a8-42ca-aab7-d990b6178677"). InnerVolumeSpecName "kube-api-access-czv9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.177549 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czv9q\" (UniqueName: \"kubernetes.io/projected/cf27fcd2-e3a8-42ca-aab7-d990b6178677-kube-api-access-czv9q\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.340926 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hnsp5" event={"ID":"d38fb54a-51b5-4733-8437-2cc1398a5938","Type":"ContainerDied","Data":"e5c03865b67b286fec0d9074738a6af102e27f240c7231ebf3b94f6571880e99"} Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.341232 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5c03865b67b286fec0d9074738a6af102e27f240c7231ebf3b94f6571880e99" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.348191 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-9fc2-account-create-vz4kk" event={"ID":"cf27fcd2-e3a8-42ca-aab7-d990b6178677","Type":"ContainerDied","Data":"6da191c8d4f37c009e5423f33e24a26952fed31c19fb3bac5ef718b759775257"} Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.348359 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da191c8d4f37c009e5423f33e24a26952fed31c19fb3bac5ef718b759775257" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.348766 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-9fc2-account-create-vz4kk" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.393911 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.585490 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mr9f\" (UniqueName: \"kubernetes.io/projected/d38fb54a-51b5-4733-8437-2cc1398a5938-kube-api-access-5mr9f\") pod \"d38fb54a-51b5-4733-8437-2cc1398a5938\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.585883 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38fb54a-51b5-4733-8437-2cc1398a5938-operator-scripts\") pod \"d38fb54a-51b5-4733-8437-2cc1398a5938\" (UID: \"d38fb54a-51b5-4733-8437-2cc1398a5938\") " Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.587253 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d38fb54a-51b5-4733-8437-2cc1398a5938-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d38fb54a-51b5-4733-8437-2cc1398a5938" (UID: "d38fb54a-51b5-4733-8437-2cc1398a5938"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.595239 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d38fb54a-51b5-4733-8437-2cc1398a5938-kube-api-access-5mr9f" (OuterVolumeSpecName: "kube-api-access-5mr9f") pod "d38fb54a-51b5-4733-8437-2cc1398a5938" (UID: "d38fb54a-51b5-4733-8437-2cc1398a5938"). InnerVolumeSpecName "kube-api-access-5mr9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.688644 5028 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d38fb54a-51b5-4733-8437-2cc1398a5938-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:30 crc kubenswrapper[5028]: I1123 08:56:30.688683 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mr9f\" (UniqueName: \"kubernetes.io/projected/d38fb54a-51b5-4733-8437-2cc1398a5938-kube-api-access-5mr9f\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.036508 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-283a-account-create-2pzrs"] Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.047122 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-n8s49"] Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.068166 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26449592-f0d6-466b-bdcf-ebd15f73bf27" path="/var/lib/kubelet/pods/26449592-f0d6-466b-bdcf-ebd15f73bf27/volumes" Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.069126 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af8f82a-ad6a-401b-880d-cd612c1fd9a6" path="/var/lib/kubelet/pods/4af8f82a-ad6a-401b-880d-cd612c1fd9a6/volumes" Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.070472 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79b109a7-3cb4-4f58-81b7-b6c9b44c1657" path="/var/lib/kubelet/pods/79b109a7-3cb4-4f58-81b7-b6c9b44c1657/volumes" Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.071580 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-283a-account-create-2pzrs"] Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.072860 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-n8s49"] Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.085648 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1148-account-create-78mng"] Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.095769 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1148-account-create-78mng"] Nov 23 08:56:31 crc kubenswrapper[5028]: I1123 08:56:31.358908 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-hnsp5" Nov 23 08:56:33 crc kubenswrapper[5028]: I1123 08:56:33.053565 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:56:33 crc kubenswrapper[5028]: E1123 08:56:33.053889 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:56:33 crc kubenswrapper[5028]: I1123 08:56:33.073795 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478d4566-77d2-4d75-91d0-c66ee03fbbdd" path="/var/lib/kubelet/pods/478d4566-77d2-4d75-91d0-c66ee03fbbdd/volumes" Nov 23 08:56:33 crc kubenswrapper[5028]: I1123 08:56:33.074945 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ff0ece-c044-4e7b-9efb-83805ed11901" path="/var/lib/kubelet/pods/69ff0ece-c044-4e7b-9efb-83805ed11901/volumes" Nov 23 08:56:33 crc kubenswrapper[5028]: I1123 08:56:33.075983 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc3ee139-2d5c-445c-aca1-2b7468c4ffe2" path="/var/lib/kubelet/pods/cc3ee139-2d5c-445c-aca1-2b7468c4ffe2/volumes" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.722090 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-lxqwl"] Nov 23 08:56:36 crc kubenswrapper[5028]: E1123 08:56:36.723609 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf27fcd2-e3a8-42ca-aab7-d990b6178677" containerName="mariadb-account-create" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.723629 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf27fcd2-e3a8-42ca-aab7-d990b6178677" containerName="mariadb-account-create" Nov 23 08:56:36 crc kubenswrapper[5028]: E1123 08:56:36.723663 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d38fb54a-51b5-4733-8437-2cc1398a5938" containerName="mariadb-database-create" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.723670 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d38fb54a-51b5-4733-8437-2cc1398a5938" containerName="mariadb-database-create" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.723916 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf27fcd2-e3a8-42ca-aab7-d990b6178677" containerName="mariadb-account-create" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.723936 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d38fb54a-51b5-4733-8437-2cc1398a5938" containerName="mariadb-database-create" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.725025 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.727843 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-w96m4" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.727860 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.739044 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-lxqwl"] Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.877761 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-combined-ca-bundle\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.878144 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gchpd\" (UniqueName: \"kubernetes.io/projected/27a7d9f4-c4f1-4be6-9092-b598185c1fda-kube-api-access-gchpd\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.878383 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-config-data\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.878605 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-job-config-data\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.999156 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-job-config-data\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.999691 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-combined-ca-bundle\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:36 crc kubenswrapper[5028]: I1123 08:56:36.999760 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gchpd\" (UniqueName: \"kubernetes.io/projected/27a7d9f4-c4f1-4be6-9092-b598185c1fda-kube-api-access-gchpd\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.000030 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-config-data\") pod \"manila-db-sync-lxqwl\" (UID: 
\"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.012490 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-job-config-data\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.012539 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-combined-ca-bundle\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.012684 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-config-data\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.029788 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gchpd\" (UniqueName: \"kubernetes.io/projected/27a7d9f4-c4f1-4be6-9092-b598185c1fda-kube-api-access-gchpd\") pod \"manila-db-sync-lxqwl\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.046745 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.668002 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-lxqwl"] Nov 23 08:56:37 crc kubenswrapper[5028]: W1123 08:56:37.674661 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27a7d9f4_c4f1_4be6_9092_b598185c1fda.slice/crio-494105eb42a6fad28b45ec6702c2105a87d6131eb804ac37b841fea77c7aac1a WatchSource:0}: Error finding container 494105eb42a6fad28b45ec6702c2105a87d6131eb804ac37b841fea77c7aac1a: Status 404 returned error can't find the container with id 494105eb42a6fad28b45ec6702c2105a87d6131eb804ac37b841fea77c7aac1a Nov 23 08:56:37 crc kubenswrapper[5028]: I1123 08:56:37.677533 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 08:56:38 crc kubenswrapper[5028]: I1123 08:56:38.456390 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-lxqwl" event={"ID":"27a7d9f4-c4f1-4be6-9092-b598185c1fda","Type":"ContainerStarted","Data":"494105eb42a6fad28b45ec6702c2105a87d6131eb804ac37b841fea77c7aac1a"} Nov 23 08:56:44 crc kubenswrapper[5028]: I1123 08:56:44.538613 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-lxqwl" event={"ID":"27a7d9f4-c4f1-4be6-9092-b598185c1fda","Type":"ContainerStarted","Data":"757470b6a71b9b5c7a430871df2a3f986af2b675106b7504e1069c2b8ec03c34"} Nov 23 08:56:44 crc kubenswrapper[5028]: I1123 08:56:44.557968 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-lxqwl" podStartSLOduration=2.91766172 podStartE2EDuration="8.557912969s" podCreationTimestamp="2025-11-23 08:56:36 +0000 UTC" firstStartedPulling="2025-11-23 08:56:37.677205534 +0000 UTC 
m=+7581.374610313" lastFinishedPulling="2025-11-23 08:56:43.317456773 +0000 UTC m=+7587.014861562" observedRunningTime="2025-11-23 08:56:44.552856724 +0000 UTC m=+7588.250261503" watchObservedRunningTime="2025-11-23 08:56:44.557912969 +0000 UTC m=+7588.255317748" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.053330 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:56:46 crc kubenswrapper[5028]: E1123 08:56:46.054037 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.481990 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9tk6"] Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.485851 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.505109 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9tk6"] Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.568239 5028 generic.go:334] "Generic (PLEG): container finished" podID="27a7d9f4-c4f1-4be6-9092-b598185c1fda" containerID="757470b6a71b9b5c7a430871df2a3f986af2b675106b7504e1069c2b8ec03c34" exitCode=0 Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.568318 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-lxqwl" event={"ID":"27a7d9f4-c4f1-4be6-9092-b598185c1fda","Type":"ContainerDied","Data":"757470b6a71b9b5c7a430871df2a3f986af2b675106b7504e1069c2b8ec03c34"} Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.578822 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-catalog-content\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.579132 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6tw\" (UniqueName: \"kubernetes.io/projected/0db42cee-55c3-4bb2-8f43-9374ba23393b-kube-api-access-fp6tw\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.579701 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-utilities\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.681914 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-utilities\") pod \"redhat-operators-z9tk6\" (UID: 
\"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.682068 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-catalog-content\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.682154 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6tw\" (UniqueName: \"kubernetes.io/projected/0db42cee-55c3-4bb2-8f43-9374ba23393b-kube-api-access-fp6tw\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.682663 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-utilities\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.682766 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-catalog-content\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.711645 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6tw\" (UniqueName: \"kubernetes.io/projected/0db42cee-55c3-4bb2-8f43-9374ba23393b-kube-api-access-fp6tw\") pod \"redhat-operators-z9tk6\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:46 crc kubenswrapper[5028]: I1123 08:56:46.847978 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:47 crc kubenswrapper[5028]: I1123 08:56:47.349889 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9tk6"] Nov 23 08:56:47 crc kubenswrapper[5028]: I1123 08:56:47.585108 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9tk6" event={"ID":"0db42cee-55c3-4bb2-8f43-9374ba23393b","Type":"ContainerStarted","Data":"1904fd007d67eedd016059e0c9585621a073551013ef15654634c26391bc0b5a"} Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.086641 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.223242 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-config-data\") pod \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.223324 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-job-config-data\") pod \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.223352 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-combined-ca-bundle\") pod \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.223407 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gchpd\" (UniqueName: \"kubernetes.io/projected/27a7d9f4-c4f1-4be6-9092-b598185c1fda-kube-api-access-gchpd\") pod \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\" (UID: \"27a7d9f4-c4f1-4be6-9092-b598185c1fda\") " Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.230834 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "27a7d9f4-c4f1-4be6-9092-b598185c1fda" (UID: "27a7d9f4-c4f1-4be6-9092-b598185c1fda"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.236215 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a7d9f4-c4f1-4be6-9092-b598185c1fda-kube-api-access-gchpd" (OuterVolumeSpecName: "kube-api-access-gchpd") pod "27a7d9f4-c4f1-4be6-9092-b598185c1fda" (UID: "27a7d9f4-c4f1-4be6-9092-b598185c1fda"). InnerVolumeSpecName "kube-api-access-gchpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.241209 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-config-data" (OuterVolumeSpecName: "config-data") pod "27a7d9f4-c4f1-4be6-9092-b598185c1fda" (UID: "27a7d9f4-c4f1-4be6-9092-b598185c1fda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.270372 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27a7d9f4-c4f1-4be6-9092-b598185c1fda" (UID: "27a7d9f4-c4f1-4be6-9092-b598185c1fda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.325590 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.325635 5028 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.325668 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a7d9f4-c4f1-4be6-9092-b598185c1fda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.325681 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gchpd\" (UniqueName: \"kubernetes.io/projected/27a7d9f4-c4f1-4be6-9092-b598185c1fda-kube-api-access-gchpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.596741 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-lxqwl" event={"ID":"27a7d9f4-c4f1-4be6-9092-b598185c1fda","Type":"ContainerDied","Data":"494105eb42a6fad28b45ec6702c2105a87d6131eb804ac37b841fea77c7aac1a"} Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.596812 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="494105eb42a6fad28b45ec6702c2105a87d6131eb804ac37b841fea77c7aac1a" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.596759 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-lxqwl" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.598660 5028 generic.go:334] "Generic (PLEG): container finished" podID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerID="6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8" exitCode=0 Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.598711 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9tk6" event={"ID":"0db42cee-55c3-4bb2-8f43-9374ba23393b","Type":"ContainerDied","Data":"6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8"} Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.916254 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 23 08:56:48 crc kubenswrapper[5028]: E1123 08:56:48.917197 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a7d9f4-c4f1-4be6-9092-b598185c1fda" containerName="manila-db-sync" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.917216 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a7d9f4-c4f1-4be6-9092-b598185c1fda" containerName="manila-db-sync" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.917435 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a7d9f4-c4f1-4be6-9092-b598185c1fda" containerName="manila-db-sync" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.918661 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.927644 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-w96m4" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.928669 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.929083 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.929348 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.929887 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.931940 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.933556 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.963137 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 23 08:56:48 crc kubenswrapper[5028]: I1123 08:56:48.988005 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.051827 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.051979 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.052008 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-config-data\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.052048 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-config-data\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054152 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 
08:56:49.054241 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07392e5e-2fb2-4582-baf3-94393eed0373-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054279 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054304 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-scripts\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054341 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-scripts\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054540 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc6b3e97-3b88-45f9-9893-160420459404-ceph\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054602 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xkx\" (UniqueName: \"kubernetes.io/projected/dc6b3e97-3b88-45f9-9893-160420459404-kube-api-access-42xkx\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054641 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5k98\" (UniqueName: \"kubernetes.io/projected/07392e5e-2fb2-4582-baf3-94393eed0373-kube-api-access-j5k98\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054666 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc6b3e97-3b88-45f9-9893-160420459404-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.054821 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc6b3e97-3b88-45f9-9893-160420459404-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.085137 5028 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-849c8dc485-cb7jv"] Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.094806 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.104388 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849c8dc485-cb7jv"] Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.160761 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.160865 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07392e5e-2fb2-4582-baf3-94393eed0373-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.160887 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.160907 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-scripts\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.160934 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-scripts\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161052 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc6b3e97-3b88-45f9-9893-160420459404-ceph\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161098 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42xkx\" (UniqueName: \"kubernetes.io/projected/dc6b3e97-3b88-45f9-9893-160420459404-kube-api-access-42xkx\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161126 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc6b3e97-3b88-45f9-9893-160420459404-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161147 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5k98\" (UniqueName: 
\"kubernetes.io/projected/07392e5e-2fb2-4582-baf3-94393eed0373-kube-api-access-j5k98\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161167 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc6b3e97-3b88-45f9-9893-160420459404-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161203 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161255 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161273 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-config-data\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-config-data\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.161772 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07392e5e-2fb2-4582-baf3-94393eed0373-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.162446 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc6b3e97-3b88-45f9-9893-160420459404-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.163853 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/dc6b3e97-3b88-45f9-9893-160420459404-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.175256 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-config-data\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 
08:56:49.176084 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-config-data\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.181069 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.181359 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/dc6b3e97-3b88-45f9-9893-160420459404-ceph\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.183686 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-scripts\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.184733 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.184836 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.186363 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07392e5e-2fb2-4582-baf3-94393eed0373-scripts\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.190415 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6b3e97-3b88-45f9-9893-160420459404-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.190643 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42xkx\" (UniqueName: \"kubernetes.io/projected/dc6b3e97-3b88-45f9-9893-160420459404-kube-api-access-42xkx\") pod \"manila-share-share1-0\" (UID: \"dc6b3e97-3b88-45f9-9893-160420459404\") " pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.191006 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5k98\" (UniqueName: \"kubernetes.io/projected/07392e5e-2fb2-4582-baf3-94393eed0373-kube-api-access-j5k98\") pod \"manila-scheduler-0\" (UID: \"07392e5e-2fb2-4582-baf3-94393eed0373\") " 
pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.261785 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.263557 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-nb\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.263658 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-config\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.263715 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.263816 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pgv\" (UniqueName: \"kubernetes.io/projected/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-kube-api-access-c4pgv\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.263935 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-dns-svc\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.264108 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-sb\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.273692 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.276240 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.287111 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.291004 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.365908 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9b3987-ceb5-4869-bd2b-5892218da671-logs\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.366420 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-scripts\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.366552 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-nb\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.366640 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-config\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.366722 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd9b3987-ceb5-4869-bd2b-5892218da671-etc-machine-id\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.366787 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-config-data-custom\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.366879 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4pgv\" (UniqueName: \"kubernetes.io/projected/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-kube-api-access-c4pgv\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.374110 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.374318 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-config\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.374420 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5pm\" (UniqueName: \"kubernetes.io/projected/fd9b3987-ceb5-4869-bd2b-5892218da671-kube-api-access-vr5pm\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.374963 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-dns-svc\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.375059 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-config-data\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.375150 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-nb\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.376317 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-dns-svc\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.377399 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-sb\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.378233 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-sb\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.433655 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4pgv\" (UniqueName: \"kubernetes.io/projected/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-kube-api-access-c4pgv\") pod \"dnsmasq-dns-849c8dc485-cb7jv\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.496994 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd9b3987-ceb5-4869-bd2b-5892218da671-etc-machine-id\") pod \"manila-api-0\" (UID: 
\"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.497056 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-config-data-custom\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.497204 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.497232 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5pm\" (UniqueName: \"kubernetes.io/projected/fd9b3987-ceb5-4869-bd2b-5892218da671-kube-api-access-vr5pm\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.497305 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-config-data\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.497439 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9b3987-ceb5-4869-bd2b-5892218da671-logs\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.497484 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-scripts\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.498774 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd9b3987-ceb5-4869-bd2b-5892218da671-etc-machine-id\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.500959 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9b3987-ceb5-4869-bd2b-5892218da671-logs\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.511316 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.512250 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-config-data-custom\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " 
pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.516785 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-config-data\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.520617 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd9b3987-ceb5-4869-bd2b-5892218da671-scripts\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.554182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5pm\" (UniqueName: \"kubernetes.io/projected/fd9b3987-ceb5-4869-bd2b-5892218da671-kube-api-access-vr5pm\") pod \"manila-api-0\" (UID: \"fd9b3987-ceb5-4869-bd2b-5892218da671\") " pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.559287 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 23 08:56:49 crc kubenswrapper[5028]: I1123 08:56:49.730487 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.221500 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 23 08:56:50 crc kubenswrapper[5028]: W1123 08:56:50.244372 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07392e5e_2fb2_4582_baf3_94393eed0373.slice/crio-a986db07e05b904ff452cc1323c70878abded7b75aabb2e4c33c3c58e0da923e WatchSource:0}: Error finding container a986db07e05b904ff452cc1323c70878abded7b75aabb2e4c33c3c58e0da923e: Status 404 returned error can't find the container with id a986db07e05b904ff452cc1323c70878abded7b75aabb2e4c33c3c58e0da923e Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.250778 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.605118 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.682998 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849c8dc485-cb7jv"] Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.824605 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"fd9b3987-ceb5-4869-bd2b-5892218da671","Type":"ContainerStarted","Data":"d4945579f0a8f0a3384d533fcd3e667b0a42ae8110b372a50d2b6f7765e809fa"} Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.832337 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"07392e5e-2fb2-4582-baf3-94393eed0373","Type":"ContainerStarted","Data":"a986db07e05b904ff452cc1323c70878abded7b75aabb2e4c33c3c58e0da923e"} Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.837311 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9tk6" event={"ID":"0db42cee-55c3-4bb2-8f43-9374ba23393b","Type":"ContainerStarted","Data":"0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf"} Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 
08:56:50.844871 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc6b3e97-3b88-45f9-9893-160420459404","Type":"ContainerStarted","Data":"33786f395dcd3bd04e29fb7fe3c7d26037238127a603602374f770040b821e24"} Nov 23 08:56:50 crc kubenswrapper[5028]: I1123 08:56:50.864740 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" event={"ID":"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e","Type":"ContainerStarted","Data":"d50a0f5731a5b020c5e52fd8254c08e82adb1c765d0a525ca0d0b47974a5a418"} Nov 23 08:56:51 crc kubenswrapper[5028]: I1123 08:56:51.598291 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 08:56:51 crc kubenswrapper[5028]: I1123 08:56:51.918297 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"fd9b3987-ceb5-4869-bd2b-5892218da671","Type":"ContainerStarted","Data":"50c3ba1db37973a276e97837c8099ea558b2912295c7277ac06bbb34ae2de369"} Nov 23 08:56:51 crc kubenswrapper[5028]: I1123 08:56:51.950051 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"07392e5e-2fb2-4582-baf3-94393eed0373","Type":"ContainerStarted","Data":"df804a53ddc657c44bcaa9147c61bd763cb60dc3caf4a65f03ecc1cbd97dfdec"} Nov 23 08:56:51 crc kubenswrapper[5028]: I1123 08:56:51.980872 5028 generic.go:334] "Generic (PLEG): container finished" podID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerID="a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b" exitCode=0 Nov 23 08:56:51 crc kubenswrapper[5028]: I1123 08:56:51.982176 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" event={"ID":"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e","Type":"ContainerDied","Data":"a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b"} Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.046652 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"fd9b3987-ceb5-4869-bd2b-5892218da671","Type":"ContainerStarted","Data":"5abee144ad0da462342ff7506d930ca678727722b3e797d3ed2c5c453aaec74c"} Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.048056 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.099938 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"07392e5e-2fb2-4582-baf3-94393eed0373","Type":"ContainerStarted","Data":"eef17f14f2b4afe216566e3c9c33d6224f4e9e30517f194af8ae50f3586a2261"} Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.105501 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" event={"ID":"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e","Type":"ContainerStarted","Data":"9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca"} Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.106233 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.128722 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.128698682 podStartE2EDuration="4.128698682s" podCreationTimestamp="2025-11-23 08:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-23 08:56:53.101116703 +0000 UTC m=+7596.798521482" watchObservedRunningTime="2025-11-23 08:56:53.128698682 +0000 UTC m=+7596.826103471" Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.160213 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.692714044 podStartE2EDuration="5.160184038s" podCreationTimestamp="2025-11-23 08:56:48 +0000 UTC" firstStartedPulling="2025-11-23 08:56:50.253811319 +0000 UTC m=+7593.951216098" lastFinishedPulling="2025-11-23 08:56:50.721281313 +0000 UTC m=+7594.418686092" observedRunningTime="2025-11-23 08:56:53.13470242 +0000 UTC m=+7596.832107199" watchObservedRunningTime="2025-11-23 08:56:53.160184038 +0000 UTC m=+7596.857588817" Nov 23 08:56:53 crc kubenswrapper[5028]: I1123 08:56:53.204906 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" podStartSLOduration=5.204876079 podStartE2EDuration="5.204876079s" podCreationTimestamp="2025-11-23 08:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:56:53.1574476 +0000 UTC m=+7596.854852379" watchObservedRunningTime="2025-11-23 08:56:53.204876079 +0000 UTC m=+7596.902280868" Nov 23 08:56:54 crc kubenswrapper[5028]: I1123 08:56:54.054472 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dfhjk"] Nov 23 08:56:54 crc kubenswrapper[5028]: I1123 08:56:54.063838 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dfhjk"] Nov 23 08:56:55 crc kubenswrapper[5028]: I1123 08:56:55.072959 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e38acb26-f8ac-427f-87bb-2497523de298" path="/var/lib/kubelet/pods/e38acb26-f8ac-427f-87bb-2497523de298/volumes" Nov 23 08:56:55 crc kubenswrapper[5028]: I1123 08:56:55.164898 5028 generic.go:334] "Generic (PLEG): container finished" podID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerID="0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf" exitCode=0 Nov 23 08:56:55 crc kubenswrapper[5028]: I1123 08:56:55.165010 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9tk6" event={"ID":"0db42cee-55c3-4bb2-8f43-9374ba23393b","Type":"ContainerDied","Data":"0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf"} Nov 23 08:56:56 crc kubenswrapper[5028]: I1123 08:56:56.183015 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9tk6" event={"ID":"0db42cee-55c3-4bb2-8f43-9374ba23393b","Type":"ContainerStarted","Data":"c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5"} Nov 23 08:56:56 crc kubenswrapper[5028]: I1123 08:56:56.222068 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9tk6" podStartSLOduration=3.153442054 podStartE2EDuration="10.222049727s" podCreationTimestamp="2025-11-23 08:56:46 +0000 UTC" firstStartedPulling="2025-11-23 08:56:48.601160021 +0000 UTC m=+7592.298564800" lastFinishedPulling="2025-11-23 08:56:55.669767694 +0000 UTC m=+7599.367172473" observedRunningTime="2025-11-23 08:56:56.219342821 +0000 UTC m=+7599.916747600" watchObservedRunningTime="2025-11-23 08:56:56.222049727 +0000 UTC m=+7599.919454506" Nov 23 08:56:56 crc kubenswrapper[5028]: I1123 08:56:56.848524 5028 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:56 crc kubenswrapper[5028]: I1123 08:56:56.848576 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:56:57 crc kubenswrapper[5028]: I1123 08:56:57.909021 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9tk6" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="registry-server" probeResult="failure" output=< Nov 23 08:56:57 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 08:56:57 crc kubenswrapper[5028]: > Nov 23 08:56:59 crc kubenswrapper[5028]: I1123 08:56:59.053246 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:56:59 crc kubenswrapper[5028]: E1123 08:56:59.053652 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:56:59 crc kubenswrapper[5028]: I1123 08:56:59.274767 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 23 08:56:59 crc kubenswrapper[5028]: I1123 08:56:59.732097 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:56:59 crc kubenswrapper[5028]: I1123 08:56:59.836768 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"] Nov 23 08:56:59 crc kubenswrapper[5028]: I1123 08:56:59.837101 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" podUID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerName="dnsmasq-dns" containerID="cri-o://6ede3c4a3084d0aa498c2a94d7175a16d7f4147b973090909ba21a8389bf8c23" gracePeriod=10 Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.255051 5028 generic.go:334] "Generic (PLEG): container finished" podID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerID="6ede3c4a3084d0aa498c2a94d7175a16d7f4147b973090909ba21a8389bf8c23" exitCode=0 Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.255538 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" event={"ID":"5b8d2873-13b3-410e-8597-342fc58a49fb","Type":"ContainerDied","Data":"6ede3c4a3084d0aa498c2a94d7175a16d7f4147b973090909ba21a8389bf8c23"} Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.394095 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.556295 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-nb\") pod \"5b8d2873-13b3-410e-8597-342fc58a49fb\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.556444 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-dns-svc\") pod \"5b8d2873-13b3-410e-8597-342fc58a49fb\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.556538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-config\") pod \"5b8d2873-13b3-410e-8597-342fc58a49fb\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.557710 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-sb\") pod \"5b8d2873-13b3-410e-8597-342fc58a49fb\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.557864 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwjkf\" (UniqueName: \"kubernetes.io/projected/5b8d2873-13b3-410e-8597-342fc58a49fb-kube-api-access-lwjkf\") pod \"5b8d2873-13b3-410e-8597-342fc58a49fb\" (UID: \"5b8d2873-13b3-410e-8597-342fc58a49fb\") " Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.567555 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8d2873-13b3-410e-8597-342fc58a49fb-kube-api-access-lwjkf" (OuterVolumeSpecName: "kube-api-access-lwjkf") pod "5b8d2873-13b3-410e-8597-342fc58a49fb" (UID: "5b8d2873-13b3-410e-8597-342fc58a49fb"). InnerVolumeSpecName "kube-api-access-lwjkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.619574 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b8d2873-13b3-410e-8597-342fc58a49fb" (UID: "5b8d2873-13b3-410e-8597-342fc58a49fb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.619597 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b8d2873-13b3-410e-8597-342fc58a49fb" (UID: "5b8d2873-13b3-410e-8597-342fc58a49fb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.631478 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-config" (OuterVolumeSpecName: "config") pod "5b8d2873-13b3-410e-8597-342fc58a49fb" (UID: "5b8d2873-13b3-410e-8597-342fc58a49fb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.633604 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b8d2873-13b3-410e-8597-342fc58a49fb" (UID: "5b8d2873-13b3-410e-8597-342fc58a49fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.662974 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.663003 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.663012 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.663020 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8d2873-13b3-410e-8597-342fc58a49fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:00 crc kubenswrapper[5028]: I1123 08:57:00.663029 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwjkf\" (UniqueName: \"kubernetes.io/projected/5b8d2873-13b3-410e-8597-342fc58a49fb-kube-api-access-lwjkf\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.272096 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" event={"ID":"5b8d2873-13b3-410e-8597-342fc58a49fb","Type":"ContainerDied","Data":"dea60cc11281a657a0dc0a34bfaf044f7dc4e4ad8295954a638944a8478e374e"} Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.272164 5028 scope.go:117] "RemoveContainer" containerID="6ede3c4a3084d0aa498c2a94d7175a16d7f4147b973090909ba21a8389bf8c23" Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.272306 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb8fb6fc-pqrrq" Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.278279 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc6b3e97-3b88-45f9-9893-160420459404","Type":"ContainerStarted","Data":"859af2b651c5bdce7147ef1bd3797847aac3fba0ac59a931e9059e41c89eacac"} Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.278322 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"dc6b3e97-3b88-45f9-9893-160420459404","Type":"ContainerStarted","Data":"982dab76191821a20d28ebb3ecad4c083874c28300f91f4b393a2f96ecb9106f"} Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.304539 5028 scope.go:117] "RemoveContainer" containerID="b35550993ec1096ab0b050a2bd0ad4f719924fd54164f6eb3277cf6da42b0814" Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.308577 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"] Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.319353 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb8fb6fc-pqrrq"] Nov 23 08:57:01 crc kubenswrapper[5028]: I1123 08:57:01.327177 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.902611111 podStartE2EDuration="13.327152556s" podCreationTimestamp="2025-11-23 08:56:48 +0000 UTC" firstStartedPulling="2025-11-23 08:56:50.204388961 +0000 UTC m=+7593.901793740" lastFinishedPulling="2025-11-23 08:56:59.628930406 +0000 UTC m=+7603.326335185" observedRunningTime="2025-11-23 08:57:01.324122992 +0000 UTC m=+7605.021527761" watchObservedRunningTime="2025-11-23 08:57:01.327152556 +0000 UTC m=+7605.024557335" Nov 23 08:57:02 crc kubenswrapper[5028]: I1123 08:57:02.428796 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:57:02 crc kubenswrapper[5028]: I1123 08:57:02.429480 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-central-agent" containerID="cri-o://a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d" gracePeriod=30 Nov 23 08:57:02 crc kubenswrapper[5028]: I1123 08:57:02.429579 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="proxy-httpd" containerID="cri-o://7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75" gracePeriod=30 Nov 23 08:57:02 crc kubenswrapper[5028]: I1123 08:57:02.429645 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-notification-agent" containerID="cri-o://28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb" gracePeriod=30 Nov 23 08:57:02 crc kubenswrapper[5028]: I1123 08:57:02.429654 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="sg-core" containerID="cri-o://bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59" gracePeriod=30 Nov 23 08:57:03 crc kubenswrapper[5028]: I1123 08:57:03.068813 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8d2873-13b3-410e-8597-342fc58a49fb" 
path="/var/lib/kubelet/pods/5b8d2873-13b3-410e-8597-342fc58a49fb/volumes" Nov 23 08:57:03 crc kubenswrapper[5028]: I1123 08:57:03.317494 5028 generic.go:334] "Generic (PLEG): container finished" podID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerID="7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75" exitCode=0 Nov 23 08:57:03 crc kubenswrapper[5028]: I1123 08:57:03.317534 5028 generic.go:334] "Generic (PLEG): container finished" podID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerID="bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59" exitCode=2 Nov 23 08:57:03 crc kubenswrapper[5028]: I1123 08:57:03.317543 5028 generic.go:334] "Generic (PLEG): container finished" podID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerID="a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d" exitCode=0 Nov 23 08:57:03 crc kubenswrapper[5028]: I1123 08:57:03.317566 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerDied","Data":"7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75"} Nov 23 08:57:03 crc kubenswrapper[5028]: I1123 08:57:03.317599 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerDied","Data":"bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59"} Nov 23 08:57:03 crc kubenswrapper[5028]: I1123 08:57:03.317611 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerDied","Data":"a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d"} Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.824325 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.933851 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-combined-ca-bundle\") pod \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.933996 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-scripts\") pod \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.934080 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-config-data\") pod \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.934140 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-sg-core-conf-yaml\") pod \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.934416 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-run-httpd\") pod \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.934503 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-log-httpd\") pod \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.934587 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/2ecc121e-d13d-469a-9e62-083e2ec8ad96-kube-api-access-jcf74\") pod \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\" (UID: \"2ecc121e-d13d-469a-9e62-083e2ec8ad96\") " Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.936528 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ecc121e-d13d-469a-9e62-083e2ec8ad96" (UID: "2ecc121e-d13d-469a-9e62-083e2ec8ad96"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.937841 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ecc121e-d13d-469a-9e62-083e2ec8ad96" (UID: "2ecc121e-d13d-469a-9e62-083e2ec8ad96"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.955293 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ecc121e-d13d-469a-9e62-083e2ec8ad96-kube-api-access-jcf74" (OuterVolumeSpecName: "kube-api-access-jcf74") pod "2ecc121e-d13d-469a-9e62-083e2ec8ad96" (UID: "2ecc121e-d13d-469a-9e62-083e2ec8ad96"). InnerVolumeSpecName "kube-api-access-jcf74". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.964993 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-scripts" (OuterVolumeSpecName: "scripts") pod "2ecc121e-d13d-469a-9e62-083e2ec8ad96" (UID: "2ecc121e-d13d-469a-9e62-083e2ec8ad96"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:57:06 crc kubenswrapper[5028]: I1123 08:57:06.969378 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ecc121e-d13d-469a-9e62-083e2ec8ad96" (UID: "2ecc121e-d13d-469a-9e62-083e2ec8ad96"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.039404 5028 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.039448 5028 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.039459 5028 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ecc121e-d13d-469a-9e62-083e2ec8ad96-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.039474 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/2ecc121e-d13d-469a-9e62-083e2ec8ad96-kube-api-access-jcf74\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.039486 5028 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-scripts\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.055650 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ecc121e-d13d-469a-9e62-083e2ec8ad96" (UID: "2ecc121e-d13d-469a-9e62-083e2ec8ad96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.061466 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-config-data" (OuterVolumeSpecName: "config-data") pod "2ecc121e-d13d-469a-9e62-083e2ec8ad96" (UID: "2ecc121e-d13d-469a-9e62-083e2ec8ad96"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.145219 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.145285 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ecc121e-d13d-469a-9e62-083e2ec8ad96-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.363555 5028 generic.go:334] "Generic (PLEG): container finished" podID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerID="28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb" exitCode=0 Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.363670 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.363661 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerDied","Data":"28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb"} Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.364206 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ecc121e-d13d-469a-9e62-083e2ec8ad96","Type":"ContainerDied","Data":"657a1277d6a79d6d25059504125298d6a91733227ad7f9e2a7dd5f0a0808913a"} Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.364245 5028 scope.go:117] "RemoveContainer" containerID="7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.399687 5028 scope.go:117] "RemoveContainer" containerID="bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.401147 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.410825 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.440677 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.441272 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="proxy-httpd" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441292 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="proxy-httpd" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.441314 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-notification-agent" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441325 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-notification-agent" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.441351 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerName="dnsmasq-dns" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441358 5028 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerName="dnsmasq-dns" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.441380 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerName="init" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441387 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerName="init" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.441398 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="sg-core" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441405 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="sg-core" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.441437 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-central-agent" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441445 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-central-agent" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441667 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8d2873-13b3-410e-8597-342fc58a49fb" containerName="dnsmasq-dns" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441692 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="sg-core" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441702 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-central-agent" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441733 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="proxy-httpd" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.441745 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" containerName="ceilometer-notification-agent" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.444353 5028 scope.go:117] "RemoveContainer" containerID="28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.444921 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.452460 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.452700 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.466526 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.476535 5028 scope.go:117] "RemoveContainer" containerID="a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.557802 5028 scope.go:117] "RemoveContainer" containerID="7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.558113 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq48t\" (UniqueName: \"kubernetes.io/projected/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-kube-api-access-lq48t\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.558178 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.558217 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-run-httpd\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.558486 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.559138 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75\": container with ID starting with 7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75 not found: ID does not exist" containerID="7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.559197 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75"} err="failed to get container status \"7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75\": rpc error: code = NotFound desc = could not find container \"7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75\": container with ID starting with 7fe96e9d38c63425ce65fdcdd9aed2dc2850b2b9d7e2f8616d5e70d19690ce75 not found: ID does not exist" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.559209 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-config-data\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.559246 5028 scope.go:117] "RemoveContainer" containerID="bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.559631 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-scripts\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.559859 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-log-httpd\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.560025 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59\": container with ID starting with bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59 not found: ID does not exist" containerID="bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.560056 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59"} err="failed to get container status \"bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59\": rpc error: code = NotFound desc = could not find container \"bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59\": container with ID starting with bfc72384a807cb32ff8f9d7114ea88008b21fcb1516c0b4245d7dc6f062b0a59 not found: ID does not exist" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.560079 5028 scope.go:117] "RemoveContainer" containerID="28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.560762 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb\": container with ID starting with 28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb not found: ID does not exist" containerID="28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.560789 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb"} err="failed to get container status \"28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb\": rpc error: code = NotFound desc = could not find container \"28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb\": container with ID starting with 28c7e6d267bddeaeba295b2e7ecf1cbb989e12086ecdb7bfc8e24fe6c7ad69fb not found: ID does not exist" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 
08:57:07.560803 5028 scope.go:117] "RemoveContainer" containerID="a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d" Nov 23 08:57:07 crc kubenswrapper[5028]: E1123 08:57:07.561591 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d\": container with ID starting with a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d not found: ID does not exist" containerID="a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.561654 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d"} err="failed to get container status \"a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d\": rpc error: code = NotFound desc = could not find container \"a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d\": container with ID starting with a044ff54d4ff6d7f72e39cd57d7dde19b0f624ff58e315aa5f0a58188c4b1b4d not found: ID does not exist" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.662640 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-scripts\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.662749 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-log-httpd\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.662831 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq48t\" (UniqueName: \"kubernetes.io/projected/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-kube-api-access-lq48t\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.662866 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.662901 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-run-httpd\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.662995 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.663137 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-config-data\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.663682 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-run-httpd\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.663750 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-log-httpd\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.670725 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-config-data\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.678005 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.679604 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-scripts\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.679696 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.681587 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq48t\" (UniqueName: \"kubernetes.io/projected/ecb410a7-3a3a-433a-a7a7-a3120c5e433a-kube-api-access-lq48t\") pod \"ceilometer-0\" (UID: \"ecb410a7-3a3a-433a-a7a7-a3120c5e433a\") " pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.774024 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 23 08:57:07 crc kubenswrapper[5028]: I1123 08:57:07.923031 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9tk6" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="registry-server" probeResult="failure" output=< Nov 23 08:57:07 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 08:57:07 crc kubenswrapper[5028]: > Nov 23 08:57:08 crc kubenswrapper[5028]: I1123 08:57:08.291169 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 23 08:57:08 crc kubenswrapper[5028]: W1123 08:57:08.302293 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecb410a7_3a3a_433a_a7a7_a3120c5e433a.slice/crio-ba1c58a89b173ed2c080157431ef274c87c836ace387d9a8dbf1af191e5ab638 WatchSource:0}: Error finding container ba1c58a89b173ed2c080157431ef274c87c836ace387d9a8dbf1af191e5ab638: Status 404 returned error can't find the container with id ba1c58a89b173ed2c080157431ef274c87c836ace387d9a8dbf1af191e5ab638 Nov 23 08:57:08 crc kubenswrapper[5028]: I1123 08:57:08.376961 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb410a7-3a3a-433a-a7a7-a3120c5e433a","Type":"ContainerStarted","Data":"ba1c58a89b173ed2c080157431ef274c87c836ace387d9a8dbf1af191e5ab638"} Nov 23 08:57:09 crc kubenswrapper[5028]: I1123 08:57:09.079151 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ecc121e-d13d-469a-9e62-083e2ec8ad96" path="/var/lib/kubelet/pods/2ecc121e-d13d-469a-9e62-083e2ec8ad96/volumes" Nov 23 08:57:09 crc kubenswrapper[5028]: I1123 08:57:09.262573 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 23 08:57:09 crc kubenswrapper[5028]: I1123 08:57:09.388816 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb410a7-3a3a-433a-a7a7-a3120c5e433a","Type":"ContainerStarted","Data":"d2054ea9d7e8b2f857d8786bd5eabd120351418f81899f423b700dd2626cecec"} Nov 23 08:57:09 crc kubenswrapper[5028]: I1123 08:57:09.388863 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb410a7-3a3a-433a-a7a7-a3120c5e433a","Type":"ContainerStarted","Data":"14dc48a49bb04afa21d0d73c6940003f4bd3165e96dd0c7f60494caf6201a5f7"} Nov 23 08:57:10 crc kubenswrapper[5028]: I1123 08:57:10.407659 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb410a7-3a3a-433a-a7a7-a3120c5e433a","Type":"ContainerStarted","Data":"9a29b71f54deb9114166dce2e14b09bd42d8d3de3d11ccc3ff473bd050b0763e"} Nov 23 08:57:11 crc kubenswrapper[5028]: I1123 08:57:11.029631 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 23 08:57:11 crc kubenswrapper[5028]: I1123 08:57:11.060210 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:57:11 crc kubenswrapper[5028]: E1123 08:57:11.060475 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:57:11 crc kubenswrapper[5028]: I1123 08:57:11.166905 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 23 08:57:12 crc kubenswrapper[5028]: I1123 08:57:12.435745 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecb410a7-3a3a-433a-a7a7-a3120c5e433a","Type":"ContainerStarted","Data":"6f20192a8e55e831e12c0f4082ec1df6bfd817cffd037a0a3f9457f96a8f492b"} Nov 23 08:57:12 crc kubenswrapper[5028]: I1123 08:57:12.436147 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 23 08:57:12 crc kubenswrapper[5028]: I1123 08:57:12.467623 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.113522229 podStartE2EDuration="5.467596967s" podCreationTimestamp="2025-11-23 08:57:07 +0000 UTC" firstStartedPulling="2025-11-23 08:57:08.305138557 +0000 UTC m=+7612.002543336" lastFinishedPulling="2025-11-23 08:57:11.659213295 +0000 UTC m=+7615.356618074" observedRunningTime="2025-11-23 08:57:12.462659335 +0000 UTC m=+7616.160064164" watchObservedRunningTime="2025-11-23 08:57:12.467596967 +0000 UTC m=+7616.165001746" Nov 23 08:57:15 crc kubenswrapper[5028]: I1123 08:57:15.072974 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vmc6m"] Nov 23 08:57:15 crc kubenswrapper[5028]: I1123 08:57:15.073745 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-72g9n"] Nov 23 08:57:15 crc kubenswrapper[5028]: I1123 08:57:15.080802 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vmc6m"] Nov 23 08:57:15 crc kubenswrapper[5028]: I1123 08:57:15.091829 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-72g9n"] Nov 23 08:57:17 crc kubenswrapper[5028]: I1123 08:57:17.073690 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d789bd-7151-47af-b379-89152fb07d3d" path="/var/lib/kubelet/pods/04d789bd-7151-47af-b379-89152fb07d3d/volumes" Nov 23 08:57:17 crc kubenswrapper[5028]: I1123 08:57:17.074589 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956c5566-a9a2-4f35-a210-39618d9c332d" path="/var/lib/kubelet/pods/956c5566-a9a2-4f35-a210-39618d9c332d/volumes" Nov 23 08:57:17 crc kubenswrapper[5028]: I1123 08:57:17.907271 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9tk6" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="registry-server" probeResult="failure" output=< Nov 23 08:57:17 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 08:57:17 crc kubenswrapper[5028]: > Nov 23 08:57:21 crc kubenswrapper[5028]: I1123 08:57:21.014265 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 23 08:57:23 crc kubenswrapper[5028]: I1123 08:57:23.054846 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:57:23 crc kubenswrapper[5028]: E1123 08:57:23.055639 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:57:26 crc kubenswrapper[5028]: I1123 08:57:26.926762 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:57:26 crc kubenswrapper[5028]: I1123 08:57:26.997978 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:57:27 crc kubenswrapper[5028]: I1123 08:57:27.179581 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9tk6"] Nov 23 08:57:28 crc kubenswrapper[5028]: I1123 08:57:28.641654 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9tk6" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="registry-server" containerID="cri-o://c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5" gracePeriod=2 Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.204282 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.385486 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-utilities\") pod \"0db42cee-55c3-4bb2-8f43-9374ba23393b\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.385773 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-catalog-content\") pod \"0db42cee-55c3-4bb2-8f43-9374ba23393b\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.385916 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp6tw\" (UniqueName: \"kubernetes.io/projected/0db42cee-55c3-4bb2-8f43-9374ba23393b-kube-api-access-fp6tw\") pod \"0db42cee-55c3-4bb2-8f43-9374ba23393b\" (UID: \"0db42cee-55c3-4bb2-8f43-9374ba23393b\") " Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.388161 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-utilities" (OuterVolumeSpecName: "utilities") pod "0db42cee-55c3-4bb2-8f43-9374ba23393b" (UID: "0db42cee-55c3-4bb2-8f43-9374ba23393b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.415368 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db42cee-55c3-4bb2-8f43-9374ba23393b-kube-api-access-fp6tw" (OuterVolumeSpecName: "kube-api-access-fp6tw") pod "0db42cee-55c3-4bb2-8f43-9374ba23393b" (UID: "0db42cee-55c3-4bb2-8f43-9374ba23393b"). InnerVolumeSpecName "kube-api-access-fp6tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.489851 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp6tw\" (UniqueName: \"kubernetes.io/projected/0db42cee-55c3-4bb2-8f43-9374ba23393b-kube-api-access-fp6tw\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.489889 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.494046 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0db42cee-55c3-4bb2-8f43-9374ba23393b" (UID: "0db42cee-55c3-4bb2-8f43-9374ba23393b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.592007 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db42cee-55c3-4bb2-8f43-9374ba23393b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.654271 5028 generic.go:334] "Generic (PLEG): container finished" podID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerID="c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5" exitCode=0 Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.654329 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9tk6" event={"ID":"0db42cee-55c3-4bb2-8f43-9374ba23393b","Type":"ContainerDied","Data":"c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5"} Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.654369 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9tk6" event={"ID":"0db42cee-55c3-4bb2-8f43-9374ba23393b","Type":"ContainerDied","Data":"1904fd007d67eedd016059e0c9585621a073551013ef15654634c26391bc0b5a"} Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.654394 5028 scope.go:117] "RemoveContainer" containerID="c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.654393 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9tk6" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.695088 5028 scope.go:117] "RemoveContainer" containerID="0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.703992 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9tk6"] Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.712661 5028 scope.go:117] "RemoveContainer" containerID="c8501241c2999a4baf6d895e8e113d61704fd723bf0e2a3744d1f08a64768230" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.721413 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9tk6"] Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.726176 5028 scope.go:117] "RemoveContainer" containerID="6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.790715 5028 scope.go:117] "RemoveContainer" containerID="a2a3e1e31143fc028d6d0792226bfcc76eb5aff8fc1ec0107dd0103a15ada710" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.808238 5028 scope.go:117] "RemoveContainer" containerID="c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5" Nov 23 08:57:29 crc kubenswrapper[5028]: E1123 08:57:29.808818 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5\": container with ID starting with c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5 not found: ID does not exist" containerID="c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.808861 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5"} err="failed to get container status \"c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5\": rpc error: code = NotFound desc = could not find container \"c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5\": container with ID starting with c0d206942728708c1a9284f65d76c2e364e538582a98172e987d4eb1cfd4b0a5 not found: ID does not exist" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.808887 5028 scope.go:117] "RemoveContainer" containerID="0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf" Nov 23 08:57:29 crc kubenswrapper[5028]: E1123 08:57:29.809362 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf\": container with ID starting with 0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf not found: ID does not exist" containerID="0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.809401 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf"} err="failed to get container status \"0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf\": rpc error: code = NotFound desc = could not find container \"0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf\": container with ID starting with 
0daf46e834e553b9bbc5711bdf29cc88130077889939139214fcfc9e9ccee2cf not found: ID does not exist" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.809424 5028 scope.go:117] "RemoveContainer" containerID="6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8" Nov 23 08:57:29 crc kubenswrapper[5028]: E1123 08:57:29.809734 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8\": container with ID starting with 6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8 not found: ID does not exist" containerID="6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.809757 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8"} err="failed to get container status \"6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8\": rpc error: code = NotFound desc = could not find container \"6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8\": container with ID starting with 6c17037e4001d93e6b22599b5a96f4fde6790576b3e3d00c7c9d6632214229a8 not found: ID does not exist" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.832790 5028 scope.go:117] "RemoveContainer" containerID="4d4c04555c3dc7ff80a818f6dbdd0320e99df63b483544d45d14a2ac017ea56a" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.961729 5028 scope.go:117] "RemoveContainer" containerID="850d87ba0879040d37d24d57bd385a6068b4d52e26e78663b1b1dd31bfdd45a8" Nov 23 08:57:29 crc kubenswrapper[5028]: I1123 08:57:29.990989 5028 scope.go:117] "RemoveContainer" containerID="13e702adb507ae7ec828300737f9c0c487efdae5c8c9e83def21cbfba272f154" Nov 23 08:57:30 crc kubenswrapper[5028]: I1123 08:57:30.080646 5028 scope.go:117] "RemoveContainer" containerID="c966cf9a6d1cfa2ba571f9916b2abed30f1a7c56550696fee9cce5c422849aea" Nov 23 08:57:30 crc kubenswrapper[5028]: I1123 08:57:30.110716 5028 scope.go:117] "RemoveContainer" containerID="18ca84e14b8b10ed175cb21ca70cfbab33e88f34a813f3b2898e853ed8e54b67" Nov 23 08:57:30 crc kubenswrapper[5028]: I1123 08:57:30.158380 5028 scope.go:117] "RemoveContainer" containerID="d30643b6f60afd88c8e406d72d49b33df3d8deae8ccc5b390d22ca815f57e3bf" Nov 23 08:57:30 crc kubenswrapper[5028]: I1123 08:57:30.186890 5028 scope.go:117] "RemoveContainer" containerID="2da34d072f784460313c472ebcbe337286c27ed36bec29fd65edeb3a68f49214" Nov 23 08:57:31 crc kubenswrapper[5028]: I1123 08:57:31.069268 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" path="/var/lib/kubelet/pods/0db42cee-55c3-4bb2-8f43-9374ba23393b/volumes" Nov 23 08:57:34 crc kubenswrapper[5028]: I1123 08:57:34.047171 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jc7mh"] Nov 23 08:57:34 crc kubenswrapper[5028]: I1123 08:57:34.053580 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:57:34 crc kubenswrapper[5028]: E1123 08:57:34.054161 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:57:34 crc kubenswrapper[5028]: I1123 08:57:34.060850 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jc7mh"] Nov 23 08:57:35 crc kubenswrapper[5028]: I1123 08:57:35.076648 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0714cee8-f557-480e-b57a-badede4d39c5" path="/var/lib/kubelet/pods/0714cee8-f557-480e-b57a-badede4d39c5/volumes" Nov 23 08:57:38 crc kubenswrapper[5028]: I1123 08:57:38.053207 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 23 08:57:47 crc kubenswrapper[5028]: I1123 08:57:47.063308 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:57:47 crc kubenswrapper[5028]: E1123 08:57:47.064226 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:58:00 crc kubenswrapper[5028]: I1123 08:58:00.055016 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:58:00 crc kubenswrapper[5028]: E1123 08:58:00.056307 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.819879 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fddfd7647-98tcm"] Nov 23 08:58:02 crc kubenswrapper[5028]: E1123 08:58:02.821211 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="registry-server" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.821232 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="registry-server" Nov 23 08:58:02 crc kubenswrapper[5028]: E1123 08:58:02.821251 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="extract-content" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.821260 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="extract-content" Nov 23 08:58:02 crc kubenswrapper[5028]: E1123 08:58:02.821292 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="extract-utilities" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.821300 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="extract-utilities" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 
08:58:02.821553 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db42cee-55c3-4bb2-8f43-9374ba23393b" containerName="registry-server" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.822811 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.827443 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.861573 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fddfd7647-98tcm"] Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.873914 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wflpp\" (UniqueName: \"kubernetes.io/projected/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-kube-api-access-wflpp\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.874055 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-config\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.874126 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-openstack-cell1\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.874193 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-sb\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.874225 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-dns-svc\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.874291 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-nb\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.969284 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fddfd7647-98tcm"] Nov 23 08:58:02 crc kubenswrapper[5028]: E1123 08:58:02.970417 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-wflpp openstack-cell1 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context 
canceled" pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" podUID="7edb4239-0fb4-456f-a6fc-aacb3a824dd7" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.976427 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-openstack-cell1\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.976509 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-sb\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.976535 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-dns-svc\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.976561 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-nb\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.976674 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wflpp\" (UniqueName: \"kubernetes.io/projected/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-kube-api-access-wflpp\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.976732 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-config\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.977759 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-openstack-cell1\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.977784 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-dns-svc\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.979149 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-config\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 
crc kubenswrapper[5028]: I1123 08:58:02.979478 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-nb\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:02 crc kubenswrapper[5028]: I1123 08:58:02.979558 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-sb\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.011237 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b96c5b56f-shg2s"] Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.011741 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wflpp\" (UniqueName: \"kubernetes.io/projected/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-kube-api-access-wflpp\") pod \"dnsmasq-dns-5fddfd7647-98tcm\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.013472 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.017965 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-networker" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.034505 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b96c5b56f-shg2s"] Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.079246 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5d7\" (UniqueName: \"kubernetes.io/projected/5a2802d6-5d73-4926-9011-150d8a757a17-kube-api-access-th5d7\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.079290 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-networker\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.079336 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-config\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.079407 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-dns-svc\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.079440 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-cell1\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.079512 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.079597 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.181607 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.181774 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.181808 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th5d7\" (UniqueName: \"kubernetes.io/projected/5a2802d6-5d73-4926-9011-150d8a757a17-kube-api-access-th5d7\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.181832 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-networker\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.181881 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-config\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.181931 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-dns-svc\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.181973 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-cell1\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.182750 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-sb\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.183845 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-nb\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.184160 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-networker\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.208265 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-dns-svc\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.209910 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th5d7\" (UniqueName: \"kubernetes.io/projected/5a2802d6-5d73-4926-9011-150d8a757a17-kube-api-access-th5d7\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.210645 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-config\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.210690 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-cell1\") pod \"dnsmasq-dns-5b96c5b56f-shg2s\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.354368 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.371069 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.385199 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wflpp\" (UniqueName: \"kubernetes.io/projected/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-kube-api-access-wflpp\") pod \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.385552 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-openstack-cell1\") pod \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.385756 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-config\") pod \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.385943 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-dns-svc\") pod \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.386131 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "7edb4239-0fb4-456f-a6fc-aacb3a824dd7" (UID: "7edb4239-0fb4-456f-a6fc-aacb3a824dd7"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.386174 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-config" (OuterVolumeSpecName: "config") pod "7edb4239-0fb4-456f-a6fc-aacb3a824dd7" (UID: "7edb4239-0fb4-456f-a6fc-aacb3a824dd7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.386323 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-sb\") pod \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.386447 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-nb\") pod \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\" (UID: \"7edb4239-0fb4-456f-a6fc-aacb3a824dd7\") " Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.386624 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7edb4239-0fb4-456f-a6fc-aacb3a824dd7" (UID: "7edb4239-0fb4-456f-a6fc-aacb3a824dd7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.386654 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7edb4239-0fb4-456f-a6fc-aacb3a824dd7" (UID: "7edb4239-0fb4-456f-a6fc-aacb3a824dd7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.386925 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7edb4239-0fb4-456f-a6fc-aacb3a824dd7" (UID: "7edb4239-0fb4-456f-a6fc-aacb3a824dd7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.387860 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.387969 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.388036 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.388155 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.388228 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.395049 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-kube-api-access-wflpp" (OuterVolumeSpecName: "kube-api-access-wflpp") pod "7edb4239-0fb4-456f-a6fc-aacb3a824dd7" (UID: "7edb4239-0fb4-456f-a6fc-aacb3a824dd7"). InnerVolumeSpecName "kube-api-access-wflpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.411735 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:03 crc kubenswrapper[5028]: I1123 08:58:03.491780 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wflpp\" (UniqueName: \"kubernetes.io/projected/7edb4239-0fb4-456f-a6fc-aacb3a824dd7-kube-api-access-wflpp\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:04 crc kubenswrapper[5028]: I1123 08:58:04.017682 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b96c5b56f-shg2s"] Nov 23 08:58:04 crc kubenswrapper[5028]: I1123 08:58:04.371902 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fddfd7647-98tcm" Nov 23 08:58:04 crc kubenswrapper[5028]: I1123 08:58:04.371910 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" event={"ID":"5a2802d6-5d73-4926-9011-150d8a757a17","Type":"ContainerStarted","Data":"25540fa983131169099384061565c8c404a90ddb676cae830ff9e5a558af90be"} Nov 23 08:58:04 crc kubenswrapper[5028]: I1123 08:58:04.372548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" event={"ID":"5a2802d6-5d73-4926-9011-150d8a757a17","Type":"ContainerStarted","Data":"90ff6bf831f022b82a49944f6a92517d675131b3bc5f465e426c4c12161ca5ef"} Nov 23 08:58:04 crc kubenswrapper[5028]: I1123 08:58:04.496763 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fddfd7647-98tcm"] Nov 23 08:58:04 crc kubenswrapper[5028]: I1123 08:58:04.518027 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fddfd7647-98tcm"] Nov 23 08:58:05 crc kubenswrapper[5028]: I1123 08:58:05.075327 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7edb4239-0fb4-456f-a6fc-aacb3a824dd7" path="/var/lib/kubelet/pods/7edb4239-0fb4-456f-a6fc-aacb3a824dd7/volumes" Nov 23 08:58:05 crc kubenswrapper[5028]: I1123 08:58:05.395731 5028 generic.go:334] "Generic (PLEG): container finished" podID="5a2802d6-5d73-4926-9011-150d8a757a17" containerID="25540fa983131169099384061565c8c404a90ddb676cae830ff9e5a558af90be" exitCode=0 Nov 23 08:58:05 crc kubenswrapper[5028]: I1123 08:58:05.395840 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" event={"ID":"5a2802d6-5d73-4926-9011-150d8a757a17","Type":"ContainerDied","Data":"25540fa983131169099384061565c8c404a90ddb676cae830ff9e5a558af90be"} Nov 23 08:58:06 crc kubenswrapper[5028]: I1123 08:58:06.406165 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" event={"ID":"5a2802d6-5d73-4926-9011-150d8a757a17","Type":"ContainerStarted","Data":"ce26072283d1a635f5388af7c8bb2e111d6268f579c8bcc9302d323edf1582cc"} Nov 23 08:58:06 crc kubenswrapper[5028]: I1123 08:58:06.406988 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:06 crc kubenswrapper[5028]: I1123 08:58:06.428093 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" podStartSLOduration=4.428072655 podStartE2EDuration="4.428072655s" podCreationTimestamp="2025-11-23 08:58:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:58:06.426898706 +0000 UTC m=+7670.124303495" watchObservedRunningTime="2025-11-23 08:58:06.428072655 +0000 UTC m=+7670.125477434" Nov 23 08:58:11 crc kubenswrapper[5028]: I1123 08:58:11.054441 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:58:11 crc kubenswrapper[5028]: E1123 08:58:11.055606 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.413218 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.530273 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849c8dc485-cb7jv"] Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.530733 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" podUID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerName="dnsmasq-dns" containerID="cri-o://9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca" gracePeriod=10 Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.771184 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65597c4885-rtdk8"] Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.774305 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.787702 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65597c4885-rtdk8"] Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.881475 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-config\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.882069 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr6vq\" (UniqueName: \"kubernetes.io/projected/c93cab81-1ff9-41ba-b339-8c0ab46d6737-kube-api-access-wr6vq\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.882145 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-dns-svc\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.882297 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-nb\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.882328 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-cell1\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.882378 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-sb\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.882413 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-networker\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.984701 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr6vq\" (UniqueName: \"kubernetes.io/projected/c93cab81-1ff9-41ba-b339-8c0ab46d6737-kube-api-access-wr6vq\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.984822 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-dns-svc\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.985631 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-nb\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.985679 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-cell1\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.985762 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-sb\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.985807 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-networker\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.985845 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-config\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.986417 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-dns-svc\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.988773 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-cell1\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.989474 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-networker\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.989573 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-config\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.990154 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-nb\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:13 crc kubenswrapper[5028]: I1123 08:58:13.992939 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-sb\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.013202 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr6vq\" (UniqueName: \"kubernetes.io/projected/c93cab81-1ff9-41ba-b339-8c0ab46d6737-kube-api-access-wr6vq\") pod \"dnsmasq-dns-65597c4885-rtdk8\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.146009 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.251210 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.294881 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-dns-svc\") pod \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.294981 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-nb\") pod \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.295090 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-config\") pod \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.295170 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4pgv\" (UniqueName: \"kubernetes.io/projected/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-kube-api-access-c4pgv\") pod \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.295463 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-sb\") pod \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\" (UID: \"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e\") " Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.302173 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-kube-api-access-c4pgv" (OuterVolumeSpecName: "kube-api-access-c4pgv") pod "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" (UID: "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e"). InnerVolumeSpecName "kube-api-access-c4pgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.364372 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" (UID: "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.391621 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" (UID: "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.401316 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.401639 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4pgv\" (UniqueName: \"kubernetes.io/projected/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-kube-api-access-c4pgv\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.401721 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.407843 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" (UID: "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.419192 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-config" (OuterVolumeSpecName: "config") pod "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" (UID: "c7ba1b5b-aa76-49c0-a53b-23688d9bed3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.504100 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.504137 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.525752 5028 generic.go:334] "Generic (PLEG): container finished" podID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerID="9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca" exitCode=0 Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.525803 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" event={"ID":"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e","Type":"ContainerDied","Data":"9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca"} Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.525844 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" event={"ID":"c7ba1b5b-aa76-49c0-a53b-23688d9bed3e","Type":"ContainerDied","Data":"d50a0f5731a5b020c5e52fd8254c08e82adb1c765d0a525ca0d0b47974a5a418"} Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.525866 5028 scope.go:117] "RemoveContainer" containerID="9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.525915 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849c8dc485-cb7jv" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.552071 5028 scope.go:117] "RemoveContainer" containerID="a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.573680 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849c8dc485-cb7jv"] Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.586769 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-849c8dc485-cb7jv"] Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.600425 5028 scope.go:117] "RemoveContainer" containerID="9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca" Nov 23 08:58:14 crc kubenswrapper[5028]: E1123 08:58:14.601157 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca\": container with ID starting with 9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca not found: ID does not exist" containerID="9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.601227 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca"} err="failed to get container status \"9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca\": rpc error: code = NotFound desc = could not find container \"9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca\": container with ID starting with 9c923b1386cfe958ea2f81c36321215cfbc7d690808d5c4bb6edbbd6869d96ca not found: ID does not exist" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.601273 5028 scope.go:117] "RemoveContainer" containerID="a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b" Nov 23 08:58:14 crc kubenswrapper[5028]: E1123 08:58:14.601760 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b\": container with ID starting with a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b not found: ID does not exist" containerID="a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.601797 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b"} err="failed to get container status \"a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b\": rpc error: code = NotFound desc = could not find container \"a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b\": container with ID starting with a9d37968981e2138541e3b80c6248f50d26ce89a47280dd841a6740e515c935b not found: ID does not exist" Nov 23 08:58:14 crc kubenswrapper[5028]: I1123 08:58:14.701307 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65597c4885-rtdk8"] Nov 23 08:58:15 crc kubenswrapper[5028]: I1123 08:58:15.071039 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" path="/var/lib/kubelet/pods/c7ba1b5b-aa76-49c0-a53b-23688d9bed3e/volumes" Nov 23 08:58:15 crc kubenswrapper[5028]: I1123 08:58:15.538609 5028 generic.go:334] 
"Generic (PLEG): container finished" podID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerID="763367f9fdff033b8773f65ff02dc7dd3d17856c668dc5fd5b4d5177a4730e91" exitCode=0 Nov 23 08:58:15 crc kubenswrapper[5028]: I1123 08:58:15.538734 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" event={"ID":"c93cab81-1ff9-41ba-b339-8c0ab46d6737","Type":"ContainerDied","Data":"763367f9fdff033b8773f65ff02dc7dd3d17856c668dc5fd5b4d5177a4730e91"} Nov 23 08:58:15 crc kubenswrapper[5028]: I1123 08:58:15.539159 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" event={"ID":"c93cab81-1ff9-41ba-b339-8c0ab46d6737","Type":"ContainerStarted","Data":"42c45a6ee994beaba04dd80f8cd9633f99897c2d2cbd1c99fba5754c55a2a13d"} Nov 23 08:58:16 crc kubenswrapper[5028]: I1123 08:58:16.047735 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fvxjt"] Nov 23 08:58:16 crc kubenswrapper[5028]: I1123 08:58:16.058073 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fvxjt"] Nov 23 08:58:16 crc kubenswrapper[5028]: I1123 08:58:16.563469 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" event={"ID":"c93cab81-1ff9-41ba-b339-8c0ab46d6737","Type":"ContainerStarted","Data":"a521665591836c6a56e9cbd56fa0a915ce4802474dda740eb9eed0659e8dad9a"} Nov 23 08:58:16 crc kubenswrapper[5028]: I1123 08:58:16.563610 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:16 crc kubenswrapper[5028]: I1123 08:58:16.594976 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" podStartSLOduration=3.594916163 podStartE2EDuration="3.594916163s" podCreationTimestamp="2025-11-23 08:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:58:16.590190677 +0000 UTC m=+7680.287595456" watchObservedRunningTime="2025-11-23 08:58:16.594916163 +0000 UTC m=+7680.292320952" Nov 23 08:58:17 crc kubenswrapper[5028]: I1123 08:58:17.028697 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6349-account-create-b48mk"] Nov 23 08:58:17 crc kubenswrapper[5028]: I1123 08:58:17.036864 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6349-account-create-b48mk"] Nov 23 08:58:17 crc kubenswrapper[5028]: I1123 08:58:17.069712 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875a3c0a-11dc-40cd-a95a-6c6603fe13bb" path="/var/lib/kubelet/pods/875a3c0a-11dc-40cd-a95a-6c6603fe13bb/volumes" Nov 23 08:58:17 crc kubenswrapper[5028]: I1123 08:58:17.070711 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f399b58f-3799-485c-8746-f4a117f83149" path="/var/lib/kubelet/pods/f399b58f-3799-485c-8746-f4a117f83149/volumes" Nov 23 08:58:22 crc kubenswrapper[5028]: I1123 08:58:22.053669 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:58:22 crc kubenswrapper[5028]: E1123 08:58:22.054633 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.148326 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.228536 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b96c5b56f-shg2s"] Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.228904 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" podUID="5a2802d6-5d73-4926-9011-150d8a757a17" containerName="dnsmasq-dns" containerID="cri-o://ce26072283d1a635f5388af7c8bb2e111d6268f579c8bcc9302d323edf1582cc" gracePeriod=10 Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.649232 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6654b5fc9-9p92f"] Nov 23 08:58:24 crc kubenswrapper[5028]: E1123 08:58:24.650225 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerName="init" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.650239 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerName="init" Nov 23 08:58:24 crc kubenswrapper[5028]: E1123 08:58:24.650288 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerName="dnsmasq-dns" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.650297 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerName="dnsmasq-dns" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.650611 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ba1b5b-aa76-49c0-a53b-23688d9bed3e" containerName="dnsmasq-dns" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.651862 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.665677 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6654b5fc9-9p92f"] Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.731027 5028 generic.go:334] "Generic (PLEG): container finished" podID="5a2802d6-5d73-4926-9011-150d8a757a17" containerID="ce26072283d1a635f5388af7c8bb2e111d6268f579c8bcc9302d323edf1582cc" exitCode=0 Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.731300 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" event={"ID":"5a2802d6-5d73-4926-9011-150d8a757a17","Type":"ContainerDied","Data":"ce26072283d1a635f5388af7c8bb2e111d6268f579c8bcc9302d323edf1582cc"} Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.877760 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-config\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.878550 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-ovsdbserver-nb\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.878595 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mkn\" (UniqueName: \"kubernetes.io/projected/fc89900f-4d23-44c4-bbda-354bd9203efd-kube-api-access-w4mkn\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.878702 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-openstack-networker\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.878774 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-dns-svc\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.878827 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-ovsdbserver-sb\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:24 crc kubenswrapper[5028]: I1123 08:58:24.878927 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-openstack-cell1\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: 
\"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.000412 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-ovsdbserver-nb\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.000520 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mkn\" (UniqueName: \"kubernetes.io/projected/fc89900f-4d23-44c4-bbda-354bd9203efd-kube-api-access-w4mkn\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.000591 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-openstack-networker\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.000645 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-dns-svc\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.000696 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-ovsdbserver-sb\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.000766 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-openstack-cell1\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.000974 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-config\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.002268 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-openstack-networker\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.005949 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-config\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 
08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.010712 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-ovsdbserver-sb\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.012235 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-ovsdbserver-nb\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.012952 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-dns-svc\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.015359 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/fc89900f-4d23-44c4-bbda-354bd9203efd-openstack-cell1\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.032476 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.033027 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mkn\" (UniqueName: \"kubernetes.io/projected/fc89900f-4d23-44c4-bbda-354bd9203efd-kube-api-access-w4mkn\") pod \"dnsmasq-dns-6654b5fc9-9p92f\" (UID: \"fc89900f-4d23-44c4-bbda-354bd9203efd\") " pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.101538 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-config\") pod \"5a2802d6-5d73-4926-9011-150d8a757a17\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.102285 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th5d7\" (UniqueName: \"kubernetes.io/projected/5a2802d6-5d73-4926-9011-150d8a757a17-kube-api-access-th5d7\") pod \"5a2802d6-5d73-4926-9011-150d8a757a17\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.102379 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-dns-svc\") pod \"5a2802d6-5d73-4926-9011-150d8a757a17\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.102617 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-nb\") pod \"5a2802d6-5d73-4926-9011-150d8a757a17\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.102706 5028 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-sb\") pod \"5a2802d6-5d73-4926-9011-150d8a757a17\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.102773 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-networker\") pod \"5a2802d6-5d73-4926-9011-150d8a757a17\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.102837 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-cell1\") pod \"5a2802d6-5d73-4926-9011-150d8a757a17\" (UID: \"5a2802d6-5d73-4926-9011-150d8a757a17\") " Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.141416 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2802d6-5d73-4926-9011-150d8a757a17-kube-api-access-th5d7" (OuterVolumeSpecName: "kube-api-access-th5d7") pod "5a2802d6-5d73-4926-9011-150d8a757a17" (UID: "5a2802d6-5d73-4926-9011-150d8a757a17"). InnerVolumeSpecName "kube-api-access-th5d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.193965 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a2802d6-5d73-4926-9011-150d8a757a17" (UID: "5a2802d6-5d73-4926-9011-150d8a757a17"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.196171 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-config" (OuterVolumeSpecName: "config") pod "5a2802d6-5d73-4926-9011-150d8a757a17" (UID: "5a2802d6-5d73-4926-9011-150d8a757a17"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.197853 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "5a2802d6-5d73-4926-9011-150d8a757a17" (UID: "5a2802d6-5d73-4926-9011-150d8a757a17"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.205636 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.206073 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th5d7\" (UniqueName: \"kubernetes.io/projected/5a2802d6-5d73-4926-9011-150d8a757a17-kube-api-access-th5d7\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.206148 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.206207 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.210266 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a2802d6-5d73-4926-9011-150d8a757a17" (UID: "5a2802d6-5d73-4926-9011-150d8a757a17"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.227325 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-networker" (OuterVolumeSpecName: "openstack-networker") pod "5a2802d6-5d73-4926-9011-150d8a757a17" (UID: "5a2802d6-5d73-4926-9011-150d8a757a17"). InnerVolumeSpecName "openstack-networker". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.240357 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a2802d6-5d73-4926-9011-150d8a757a17" (UID: "5a2802d6-5d73-4926-9011-150d8a757a17"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.309131 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.309336 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.309431 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/5a2802d6-5d73-4926-9011-150d8a757a17-openstack-networker\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.321761 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.757635 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" event={"ID":"5a2802d6-5d73-4926-9011-150d8a757a17","Type":"ContainerDied","Data":"90ff6bf831f022b82a49944f6a92517d675131b3bc5f465e426c4c12161ca5ef"} Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.758518 5028 scope.go:117] "RemoveContainer" containerID="ce26072283d1a635f5388af7c8bb2e111d6268f579c8bcc9302d323edf1582cc" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.758757 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b96c5b56f-shg2s" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.789187 5028 scope.go:117] "RemoveContainer" containerID="25540fa983131169099384061565c8c404a90ddb676cae830ff9e5a558af90be" Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.806823 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b96c5b56f-shg2s"] Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.811174 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b96c5b56f-shg2s"] Nov 23 08:58:25 crc kubenswrapper[5028]: I1123 08:58:25.890521 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6654b5fc9-9p92f"] Nov 23 08:58:26 crc kubenswrapper[5028]: I1123 08:58:26.771113 5028 generic.go:334] "Generic (PLEG): container finished" podID="fc89900f-4d23-44c4-bbda-354bd9203efd" containerID="2cd02b2eac2d4ea7231cac8f6a7def910dc6cd2db66ac60b583088a6bbccdff7" exitCode=0 Nov 23 08:58:26 crc kubenswrapper[5028]: I1123 08:58:26.771193 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" event={"ID":"fc89900f-4d23-44c4-bbda-354bd9203efd","Type":"ContainerDied","Data":"2cd02b2eac2d4ea7231cac8f6a7def910dc6cd2db66ac60b583088a6bbccdff7"} Nov 23 08:58:26 crc kubenswrapper[5028]: I1123 08:58:26.771558 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" event={"ID":"fc89900f-4d23-44c4-bbda-354bd9203efd","Type":"ContainerStarted","Data":"0e59c5a8fb7bd6d8e3ab3f34f34cb31e8e0767e0493459db08ad083debec2a5f"} Nov 23 08:58:27 crc kubenswrapper[5028]: I1123 08:58:27.067538 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2802d6-5d73-4926-9011-150d8a757a17" path="/var/lib/kubelet/pods/5a2802d6-5d73-4926-9011-150d8a757a17/volumes" Nov 23 08:58:27 crc kubenswrapper[5028]: I1123 08:58:27.791259 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" event={"ID":"fc89900f-4d23-44c4-bbda-354bd9203efd","Type":"ContainerStarted","Data":"f83f5f2dea10d26a01f3bccc34af667d8846f43647be573031377b83186eecdb"} Nov 23 08:58:27 crc kubenswrapper[5028]: I1123 08:58:27.793125 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:27 crc kubenswrapper[5028]: I1123 08:58:27.813266 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" podStartSLOduration=3.813241732 podStartE2EDuration="3.813241732s" podCreationTimestamp="2025-11-23 08:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:58:27.810697359 +0000 UTC m=+7691.508102138" 
watchObservedRunningTime="2025-11-23 08:58:27.813241732 +0000 UTC m=+7691.510646511" Nov 23 08:58:30 crc kubenswrapper[5028]: I1123 08:58:30.491242 5028 scope.go:117] "RemoveContainer" containerID="bc664f4416770ad828ace2eb7e40df5e5a4fee1627dc37131dc742db729a751b" Nov 23 08:58:30 crc kubenswrapper[5028]: I1123 08:58:30.525197 5028 scope.go:117] "RemoveContainer" containerID="4005c1c24ae8ab2407603bc1a423c2c78ff666201a50d947ce6249698638f257" Nov 23 08:58:30 crc kubenswrapper[5028]: I1123 08:58:30.581195 5028 scope.go:117] "RemoveContainer" containerID="da32e9b6731e66dd4dcbdd69d3f3ad9439794dfa945df0919fd4fa3f8e80fda2" Nov 23 08:58:30 crc kubenswrapper[5028]: I1123 08:58:30.656081 5028 scope.go:117] "RemoveContainer" containerID="1a764db8ca5db41eb2a563a5cf1c101953c603454a86e438477225207cf7826a" Nov 23 08:58:30 crc kubenswrapper[5028]: I1123 08:58:30.683856 5028 scope.go:117] "RemoveContainer" containerID="29d996c0782cdbd9f4641460d899cdac818e741d62fbbb2cfa597adfd8b090c1" Nov 23 08:58:35 crc kubenswrapper[5028]: I1123 08:58:35.323213 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6654b5fc9-9p92f" Nov 23 08:58:35 crc kubenswrapper[5028]: I1123 08:58:35.611196 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65597c4885-rtdk8"] Nov 23 08:58:35 crc kubenswrapper[5028]: I1123 08:58:35.611649 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" podUID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerName="dnsmasq-dns" containerID="cri-o://a521665591836c6a56e9cbd56fa0a915ce4802474dda740eb9eed0659e8dad9a" gracePeriod=10 Nov 23 08:58:35 crc kubenswrapper[5028]: I1123 08:58:35.909316 5028 generic.go:334] "Generic (PLEG): container finished" podID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerID="a521665591836c6a56e9cbd56fa0a915ce4802474dda740eb9eed0659e8dad9a" exitCode=0 Nov 23 08:58:35 crc kubenswrapper[5028]: I1123 08:58:35.909392 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" event={"ID":"c93cab81-1ff9-41ba-b339-8c0ab46d6737","Type":"ContainerDied","Data":"a521665591836c6a56e9cbd56fa0a915ce4802474dda740eb9eed0659e8dad9a"} Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.053876 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:58:36 crc kubenswrapper[5028]: E1123 08:58:36.054334 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.147283 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.203756 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-config\") pod \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.203891 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-cell1\") pod \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.203986 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-dns-svc\") pod \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.204123 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr6vq\" (UniqueName: \"kubernetes.io/projected/c93cab81-1ff9-41ba-b339-8c0ab46d6737-kube-api-access-wr6vq\") pod \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.204160 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-networker\") pod \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.204189 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-sb\") pod \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.204213 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-nb\") pod \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\" (UID: \"c93cab81-1ff9-41ba-b339-8c0ab46d6737\") " Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.223661 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93cab81-1ff9-41ba-b339-8c0ab46d6737-kube-api-access-wr6vq" (OuterVolumeSpecName: "kube-api-access-wr6vq") pod "c93cab81-1ff9-41ba-b339-8c0ab46d6737" (UID: "c93cab81-1ff9-41ba-b339-8c0ab46d6737"). InnerVolumeSpecName "kube-api-access-wr6vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.275735 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c93cab81-1ff9-41ba-b339-8c0ab46d6737" (UID: "c93cab81-1ff9-41ba-b339-8c0ab46d6737"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.305745 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-config" (OuterVolumeSpecName: "config") pod "c93cab81-1ff9-41ba-b339-8c0ab46d6737" (UID: "c93cab81-1ff9-41ba-b339-8c0ab46d6737"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.306204 5028 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-config\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.306233 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr6vq\" (UniqueName: \"kubernetes.io/projected/c93cab81-1ff9-41ba-b339-8c0ab46d6737-kube-api-access-wr6vq\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.306247 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.308811 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c93cab81-1ff9-41ba-b339-8c0ab46d6737" (UID: "c93cab81-1ff9-41ba-b339-8c0ab46d6737"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.309134 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c93cab81-1ff9-41ba-b339-8c0ab46d6737" (UID: "c93cab81-1ff9-41ba-b339-8c0ab46d6737"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.316124 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-networker" (OuterVolumeSpecName: "openstack-networker") pod "c93cab81-1ff9-41ba-b339-8c0ab46d6737" (UID: "c93cab81-1ff9-41ba-b339-8c0ab46d6737"). InnerVolumeSpecName "openstack-networker". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.317601 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "c93cab81-1ff9-41ba-b339-8c0ab46d6737" (UID: "c93cab81-1ff9-41ba-b339-8c0ab46d6737"). InnerVolumeSpecName "openstack-cell1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.409632 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-networker\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-networker\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.409678 5028 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.409708 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.409719 5028 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c93cab81-1ff9-41ba-b339-8c0ab46d6737-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.923109 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" event={"ID":"c93cab81-1ff9-41ba-b339-8c0ab46d6737","Type":"ContainerDied","Data":"42c45a6ee994beaba04dd80f8cd9633f99897c2d2cbd1c99fba5754c55a2a13d"} Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.923209 5028 scope.go:117] "RemoveContainer" containerID="a521665591836c6a56e9cbd56fa0a915ce4802474dda740eb9eed0659e8dad9a" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.923212 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65597c4885-rtdk8" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.955603 5028 scope.go:117] "RemoveContainer" containerID="763367f9fdff033b8773f65ff02dc7dd3d17856c668dc5fd5b4d5177a4730e91" Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.962464 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65597c4885-rtdk8"] Nov 23 08:58:36 crc kubenswrapper[5028]: I1123 08:58:36.975457 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65597c4885-rtdk8"] Nov 23 08:58:37 crc kubenswrapper[5028]: I1123 08:58:37.065913 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" path="/var/lib/kubelet/pods/c93cab81-1ff9-41ba-b339-8c0ab46d6737/volumes" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.055271 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 08:58:49 crc kubenswrapper[5028]: E1123 08:58:49.056799 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.842958 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"] Nov 23 08:58:49 crc kubenswrapper[5028]: E1123 08:58:49.843503 5028 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5a2802d6-5d73-4926-9011-150d8a757a17" containerName="init" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.843530 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2802d6-5d73-4926-9011-150d8a757a17" containerName="init" Nov 23 08:58:49 crc kubenswrapper[5028]: E1123 08:58:49.843542 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerName="dnsmasq-dns" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.843552 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerName="dnsmasq-dns" Nov 23 08:58:49 crc kubenswrapper[5028]: E1123 08:58:49.843585 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2802d6-5d73-4926-9011-150d8a757a17" containerName="dnsmasq-dns" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.843594 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2802d6-5d73-4926-9011-150d8a757a17" containerName="dnsmasq-dns" Nov 23 08:58:49 crc kubenswrapper[5028]: E1123 08:58:49.843616 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerName="init" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.843623 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerName="init" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.843842 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c93cab81-1ff9-41ba-b339-8c0ab46d6737" containerName="dnsmasq-dns" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.843869 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2802d6-5d73-4926-9011-150d8a757a17" containerName="dnsmasq-dns" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.844817 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.858728 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.858783 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.858937 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.859504 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.873159 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"] Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.884292 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"] Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.886260 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.890917 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.891064 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.905411 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"]
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.975708 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.975814 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.975893 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.975967 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ph2m\" (UniqueName: \"kubernetes.io/projected/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-kube-api-access-2ph2m\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.976110 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.976258 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmz6c\" (UniqueName: \"kubernetes.io/projected/4190418b-300b-449e-9219-bf0d0aec75c6-kube-api-access-nmz6c\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.976398 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.976463 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:49 crc kubenswrapper[5028]: I1123 08:58:49.976679 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079246 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079346 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmz6c\" (UniqueName: \"kubernetes.io/projected/4190418b-300b-449e-9219-bf0d0aec75c6-kube-api-access-nmz6c\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079415 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079488 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079575 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079769 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079849 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079891 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ph2m\" (UniqueName: \"kubernetes.io/projected/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-kube-api-access-2ph2m\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.079925 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.091503 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.091564 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.091698 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.093244 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.095818 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.100668 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmz6c\" (UniqueName: \"kubernetes.io/projected/4190418b-300b-449e-9219-bf0d0aec75c6-kube-api-access-nmz6c\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.102498 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.103317 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-clmltb\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.104103 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ph2m\" (UniqueName: \"kubernetes.io/projected/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-kube-api-access-2ph2m\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.179936 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.229028 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.847417 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"]
Nov 23 08:58:50 crc kubenswrapper[5028]: I1123 08:58:50.917494 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"]
Nov 23 08:58:51 crc kubenswrapper[5028]: I1123 08:58:51.100188 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb" event={"ID":"4190418b-300b-449e-9219-bf0d0aec75c6","Type":"ContainerStarted","Data":"0e39ed85a58bd138b0890fa5d9e996ffbce175a3dd9e88fbe631dcdb0d0c908e"}
Nov 23 08:58:51 crc kubenswrapper[5028]: I1123 08:58:51.102381 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz" event={"ID":"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9","Type":"ContainerStarted","Data":"db4968a7193180e89564af35ddff47b15ef114d42f1f75a91941b39fc0f16a55"}
Nov 23 08:58:56 crc kubenswrapper[5028]: I1123 08:58:56.065452 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nmzgg"]
Nov 23 08:58:56 crc kubenswrapper[5028]: I1123 08:58:56.078113 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nmzgg"]
Nov 23 08:58:57 crc kubenswrapper[5028]: I1123 08:58:57.072371 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0efee3-edd3-49be-b488-e32c46214d32" path="/var/lib/kubelet/pods/ff0efee3-edd3-49be-b488-e32c46214d32/volumes"
Nov 23 08:59:00 crc kubenswrapper[5028]: I1123 08:59:00.054045 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5"
Nov 23 08:59:00 crc kubenswrapper[5028]: E1123 08:59:00.054799 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:59:03 crc kubenswrapper[5028]: I1123 08:59:03.264610 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb" event={"ID":"4190418b-300b-449e-9219-bf0d0aec75c6","Type":"ContainerStarted","Data":"1cb796988b811db43668689e57e96f07d92674bdd388ee2a4e039c37f51e79a6"}
Nov 23 08:59:03 crc kubenswrapper[5028]: I1123 08:59:03.270539 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz" event={"ID":"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9","Type":"ContainerStarted","Data":"d71da8cf69f53fc5514b25fd1114f80921284c82eb406f200be04b7150035626"}
Nov 23 08:59:03 crc kubenswrapper[5028]: I1123 08:59:03.288216 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb" podStartSLOduration=2.991424265 podStartE2EDuration="14.288193946s" podCreationTimestamp="2025-11-23 08:58:49 +0000 UTC" firstStartedPulling="2025-11-23 08:58:50.854117201 +0000 UTC m=+7714.551521980" lastFinishedPulling="2025-11-23 08:59:02.150886882 +0000 UTC m=+7725.848291661" observedRunningTime="2025-11-23 08:59:03.286917325 +0000 UTC m=+7726.984322124" watchObservedRunningTime="2025-11-23 08:59:03.288193946 +0000 UTC m=+7726.985598725"
Nov 23 08:59:03 crc kubenswrapper[5028]: I1123 08:59:03.323172 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz" podStartSLOduration=3.080760476 podStartE2EDuration="14.323146547s" podCreationTimestamp="2025-11-23 08:58:49 +0000 UTC" firstStartedPulling="2025-11-23 08:58:50.933552268 +0000 UTC m=+7714.630957047" lastFinishedPulling="2025-11-23 08:59:02.175938299 +0000 UTC m=+7725.873343118" observedRunningTime="2025-11-23 08:59:03.312155206 +0000 UTC m=+7727.009559985" watchObservedRunningTime="2025-11-23 08:59:03.323146547 +0000 UTC m=+7727.020551326"
Nov 23 08:59:13 crc kubenswrapper[5028]: I1123 08:59:13.392823 5028 generic.go:334] "Generic (PLEG): container finished" podID="32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" containerID="d71da8cf69f53fc5514b25fd1114f80921284c82eb406f200be04b7150035626" exitCode=0
Nov 23 08:59:13 crc kubenswrapper[5028]: I1123 08:59:13.392902 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz" event={"ID":"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9","Type":"ContainerDied","Data":"d71da8cf69f53fc5514b25fd1114f80921284c82eb406f200be04b7150035626"}
Nov 23 08:59:14 crc kubenswrapper[5028]: I1123 08:59:14.053549 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5"
Nov 23 08:59:14 crc kubenswrapper[5028]: E1123 08:59:14.053936 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:59:14 crc kubenswrapper[5028]: I1123 08:59:14.409734 5028 generic.go:334] "Generic (PLEG): container finished" podID="4190418b-300b-449e-9219-bf0d0aec75c6" containerID="1cb796988b811db43668689e57e96f07d92674bdd388ee2a4e039c37f51e79a6" exitCode=0
Nov 23 08:59:14 crc kubenswrapper[5028]: I1123 08:59:14.409836 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb" event={"ID":"4190418b-300b-449e-9219-bf0d0aec75c6","Type":"ContainerDied","Data":"1cb796988b811db43668689e57e96f07d92674bdd388ee2a4e039c37f51e79a6"}
Nov 23 08:59:14 crc kubenswrapper[5028]: I1123 08:59:14.994911 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.092975 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-inventory\") pod \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") "
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.093111 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ph2m\" (UniqueName: \"kubernetes.io/projected/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-kube-api-access-2ph2m\") pod \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") "
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.093211 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-pre-adoption-validation-combined-ca-bundle\") pod \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") "
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.093427 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-ssh-key\") pod \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\" (UID: \"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9\") "
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.108162 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" (UID: "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.108236 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-kube-api-access-2ph2m" (OuterVolumeSpecName: "kube-api-access-2ph2m") pod "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" (UID: "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9"). InnerVolumeSpecName "kube-api-access-2ph2m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.137677 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-inventory" (OuterVolumeSpecName: "inventory") pod "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" (UID: "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.148229 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" (UID: "32cb05a6-b2bc-4434-a7eb-9aae488e4dc9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.199626 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.200134 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.200148 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ph2m\" (UniqueName: \"kubernetes.io/projected/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-kube-api-access-2ph2m\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.200159 5028 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cb05a6-b2bc-4434-a7eb-9aae488e4dc9-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.425657 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz"
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.425667 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz" event={"ID":"32cb05a6-b2bc-4434-a7eb-9aae488e4dc9","Type":"ContainerDied","Data":"db4968a7193180e89564af35ddff47b15ef114d42f1f75a91941b39fc0f16a55"}
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.425825 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4968a7193180e89564af35ddff47b15ef114d42f1f75a91941b39fc0f16a55"
Nov 23 08:59:15 crc kubenswrapper[5028]: I1123 08:59:15.853910 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.020398 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-pre-adoption-validation-combined-ca-bundle\") pod \"4190418b-300b-449e-9219-bf0d0aec75c6\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") "
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.020660 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmz6c\" (UniqueName: \"kubernetes.io/projected/4190418b-300b-449e-9219-bf0d0aec75c6-kube-api-access-nmz6c\") pod \"4190418b-300b-449e-9219-bf0d0aec75c6\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") "
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.020720 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ssh-key\") pod \"4190418b-300b-449e-9219-bf0d0aec75c6\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") "
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.020812 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-inventory\") pod \"4190418b-300b-449e-9219-bf0d0aec75c6\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") "
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.020931 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ceph\") pod \"4190418b-300b-449e-9219-bf0d0aec75c6\" (UID: \"4190418b-300b-449e-9219-bf0d0aec75c6\") "
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.028551 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ceph" (OuterVolumeSpecName: "ceph") pod "4190418b-300b-449e-9219-bf0d0aec75c6" (UID: "4190418b-300b-449e-9219-bf0d0aec75c6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.028733 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "4190418b-300b-449e-9219-bf0d0aec75c6" (UID: "4190418b-300b-449e-9219-bf0d0aec75c6"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.029001 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4190418b-300b-449e-9219-bf0d0aec75c6-kube-api-access-nmz6c" (OuterVolumeSpecName: "kube-api-access-nmz6c") pod "4190418b-300b-449e-9219-bf0d0aec75c6" (UID: "4190418b-300b-449e-9219-bf0d0aec75c6"). InnerVolumeSpecName "kube-api-access-nmz6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.062564 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4190418b-300b-449e-9219-bf0d0aec75c6" (UID: "4190418b-300b-449e-9219-bf0d0aec75c6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.062639 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-inventory" (OuterVolumeSpecName: "inventory") pod "4190418b-300b-449e-9219-bf0d0aec75c6" (UID: "4190418b-300b-449e-9219-bf0d0aec75c6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.124869 5028 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.125026 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmz6c\" (UniqueName: \"kubernetes.io/projected/4190418b-300b-449e-9219-bf0d0aec75c6-kube-api-access-nmz6c\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.125057 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.125075 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.125091 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4190418b-300b-449e-9219-bf0d0aec75c6-ceph\") on node \"crc\" DevicePath \"\""
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.440037 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb" event={"ID":"4190418b-300b-449e-9219-bf0d0aec75c6","Type":"ContainerDied","Data":"0e39ed85a58bd138b0890fa5d9e996ffbce175a3dd9e88fbe631dcdb0d0c908e"}
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.440107 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e39ed85a58bd138b0890fa5d9e996ffbce175a3dd9e88fbe631dcdb0d0c908e"
Nov 23 08:59:16 crc kubenswrapper[5028]: I1123 08:59:16.440197 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-clmltb"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.012026 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"]
Nov 23 08:59:18 crc kubenswrapper[5028]: E1123 08:59:18.013250 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4190418b-300b-449e-9219-bf0d0aec75c6" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.013279 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="4190418b-300b-449e-9219-bf0d0aec75c6" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Nov 23 08:59:18 crc kubenswrapper[5028]: E1123 08:59:18.013300 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.013313 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.013638 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cb05a6-b2bc-4434-a7eb-9aae488e4dc9" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-networ"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.013748 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="4190418b-300b-449e-9219-bf0d0aec75c6" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.015214 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.018467 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.018636 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.018663 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.018797 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.022473 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"]
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.102698 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"]
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.104797 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.107678 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.109683 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.142171 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"]
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182306 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182411 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182555 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182596 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqk8d\" (UniqueName: \"kubernetes.io/projected/c3203607-0919-4770-9464-326d5b95d8ad-kube-api-access-bqk8d\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182642 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182693 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"
Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182743 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182765 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g745\" (UniqueName: \"kubernetes.io/projected/1da50721-0bdc-4704-9a11-99c1b786a8bc-kube-api-access-8g745\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.182835 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285213 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285267 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqk8d\" (UniqueName: \"kubernetes.io/projected/c3203607-0919-4770-9464-326d5b95d8ad-kube-api-access-bqk8d\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285312 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285356 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285392 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc 
kubenswrapper[5028]: I1123 08:59:18.285544 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g745\" (UniqueName: \"kubernetes.io/projected/1da50721-0bdc-4704-9a11-99c1b786a8bc-kube-api-access-8g745\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285582 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285624 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.285670 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.292792 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.294799 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.299406 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.300515 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 
08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.301068 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.301682 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.304264 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.305070 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqk8d\" (UniqueName: \"kubernetes.io/projected/c3203607-0919-4770-9464-326d5b95d8ad-kube-api-access-bqk8d\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.305708 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g745\" (UniqueName: \"kubernetes.io/projected/1da50721-0bdc-4704-9a11-99c1b786a8bc-kube-api-access-8g745\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.339051 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.435007 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 08:59:18 crc kubenswrapper[5028]: I1123 08:59:18.978602 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"] Nov 23 08:59:19 crc kubenswrapper[5028]: W1123 08:59:19.119502 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da50721_0bdc_4704_9a11_99c1b786a8bc.slice/crio-dc37e5559042f310e456224418add12df2f39fabc9788aef9eab866689117ba7 WatchSource:0}: Error finding container dc37e5559042f310e456224418add12df2f39fabc9788aef9eab866689117ba7: Status 404 returned error can't find the container with id dc37e5559042f310e456224418add12df2f39fabc9788aef9eab866689117ba7 Nov 23 08:59:19 crc kubenswrapper[5028]: I1123 08:59:19.121176 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"] Nov 23 08:59:19 crc kubenswrapper[5028]: I1123 08:59:19.488234 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" event={"ID":"1da50721-0bdc-4704-9a11-99c1b786a8bc","Type":"ContainerStarted","Data":"dc37e5559042f310e456224418add12df2f39fabc9788aef9eab866689117ba7"} Nov 23 08:59:19 crc kubenswrapper[5028]: I1123 08:59:19.489817 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" event={"ID":"c3203607-0919-4770-9464-326d5b95d8ad","Type":"ContainerStarted","Data":"5789270d3c46942d5eec58aeb6266334e649b3a2acad9a0e3d851ce9283fad51"} Nov 23 08:59:20 crc kubenswrapper[5028]: I1123 08:59:20.506453 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" event={"ID":"1da50721-0bdc-4704-9a11-99c1b786a8bc","Type":"ContainerStarted","Data":"9edaa3e7b55f85ac4484ba78ca080c718055116e378150832fc4bfa976baa4df"} Nov 23 08:59:20 crc kubenswrapper[5028]: I1123 08:59:20.508788 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" event={"ID":"c3203607-0919-4770-9464-326d5b95d8ad","Type":"ContainerStarted","Data":"9bc52ecbd6786e9e8dffab98fe74b05df5ae9bed08b1831d8af348a6eb88d615"} Nov 23 08:59:20 crc kubenswrapper[5028]: I1123 08:59:20.531599 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" podStartSLOduration=2.097217075 podStartE2EDuration="2.531575183s" podCreationTimestamp="2025-11-23 08:59:18 +0000 UTC" firstStartedPulling="2025-11-23 08:59:19.123853379 +0000 UTC m=+7742.821258168" lastFinishedPulling="2025-11-23 08:59:19.558211467 +0000 UTC m=+7743.255616276" observedRunningTime="2025-11-23 08:59:20.523408552 +0000 UTC m=+7744.220813331" watchObservedRunningTime="2025-11-23 08:59:20.531575183 +0000 UTC m=+7744.228979972" Nov 23 08:59:20 crc kubenswrapper[5028]: I1123 08:59:20.561367 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" podStartSLOduration=3.133902698 podStartE2EDuration="3.561343856s" podCreationTimestamp="2025-11-23 08:59:17 +0000 UTC" firstStartedPulling="2025-11-23 08:59:18.982187969 +0000 UTC m=+7742.679592748" lastFinishedPulling="2025-11-23 08:59:19.409629127 +0000 UTC m=+7743.107033906" 
Nov 23 08:59:27 crc kubenswrapper[5028]: I1123 08:59:27.079397 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5"
Nov 23 08:59:27 crc kubenswrapper[5028]: E1123 08:59:27.080751 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 08:59:31 crc kubenswrapper[5028]: I1123 08:59:31.082096 5028 scope.go:117] "RemoveContainer" containerID="5679508187f4f87e15a973e7e7197521ada44005dd11eb47cb3477d71271a75e"
Nov 23 08:59:40 crc kubenswrapper[5028]: I1123 08:59:40.054200 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5"
Nov 23 08:59:40 crc kubenswrapper[5028]: I1123 08:59:40.775407 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"d25c2dc03a95af52955954eb895f6761916fc2acc6d21a4b1229867a82751002"}
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.168348 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"]
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.174142 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.178813 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.179006 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.204384 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"]
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.300817 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zll9\" (UniqueName: \"kubernetes.io/projected/3c902c23-4d0e-451c-809d-d26e6ce797fe-kube-api-access-9zll9\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.301314 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c902c23-4d0e-451c-809d-d26e6ce797fe-config-volume\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.301396 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c902c23-4d0e-451c-809d-d26e6ce797fe-secret-volume\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.404566 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zll9\" (UniqueName: \"kubernetes.io/projected/3c902c23-4d0e-451c-809d-d26e6ce797fe-kube-api-access-9zll9\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.404709 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c902c23-4d0e-451c-809d-d26e6ce797fe-config-volume\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.404840 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c902c23-4d0e-451c-809d-d26e6ce797fe-secret-volume\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.406784 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c902c23-4d0e-451c-809d-d26e6ce797fe-config-volume\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.415689 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c902c23-4d0e-451c-809d-d26e6ce797fe-secret-volume\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.437942 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zll9\" (UniqueName: \"kubernetes.io/projected/3c902c23-4d0e-451c-809d-d26e6ce797fe-kube-api-access-9zll9\") pod \"collect-profiles-29398140-7kbtd\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:00 crc kubenswrapper[5028]: I1123 09:00:00.511678 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:01 crc kubenswrapper[5028]: I1123 09:00:01.084201 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"]
Nov 23 09:00:02 crc kubenswrapper[5028]: I1123 09:00:02.067387 5028 generic.go:334] "Generic (PLEG): container finished" podID="3c902c23-4d0e-451c-809d-d26e6ce797fe" containerID="f7a46fe3d6c0c875fa496bafab5494c80f159b333ae1cc5a254ab325e058aac8" exitCode=0
Nov 23 09:00:02 crc kubenswrapper[5028]: I1123 09:00:02.067453 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd" event={"ID":"3c902c23-4d0e-451c-809d-d26e6ce797fe","Type":"ContainerDied","Data":"f7a46fe3d6c0c875fa496bafab5494c80f159b333ae1cc5a254ab325e058aac8"}
Nov 23 09:00:02 crc kubenswrapper[5028]: I1123 09:00:02.067493 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd" event={"ID":"3c902c23-4d0e-451c-809d-d26e6ce797fe","Type":"ContainerStarted","Data":"04ef74e87446180805f32a49096468d8c648f5e6bc8af87d8b2967215c67e43b"}
Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.467433 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"
Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.507168 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c902c23-4d0e-451c-809d-d26e6ce797fe-config-volume\") pod \"3c902c23-4d0e-451c-809d-d26e6ce797fe\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") "
Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.507258 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zll9\" (UniqueName: \"kubernetes.io/projected/3c902c23-4d0e-451c-809d-d26e6ce797fe-kube-api-access-9zll9\") pod \"3c902c23-4d0e-451c-809d-d26e6ce797fe\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") "
Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.507411 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c902c23-4d0e-451c-809d-d26e6ce797fe-secret-volume\") pod \"3c902c23-4d0e-451c-809d-d26e6ce797fe\" (UID: \"3c902c23-4d0e-451c-809d-d26e6ce797fe\") "
Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.508867 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c902c23-4d0e-451c-809d-d26e6ce797fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c902c23-4d0e-451c-809d-d26e6ce797fe" (UID: "3c902c23-4d0e-451c-809d-d26e6ce797fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.515199 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c902c23-4d0e-451c-809d-d26e6ce797fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c902c23-4d0e-451c-809d-d26e6ce797fe" (UID: "3c902c23-4d0e-451c-809d-d26e6ce797fe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.515497 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c902c23-4d0e-451c-809d-d26e6ce797fe-kube-api-access-9zll9" (OuterVolumeSpecName: "kube-api-access-9zll9") pod "3c902c23-4d0e-451c-809d-d26e6ce797fe" (UID: "3c902c23-4d0e-451c-809d-d26e6ce797fe"). InnerVolumeSpecName "kube-api-access-9zll9". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.609630 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c902c23-4d0e-451c-809d-d26e6ce797fe-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.609665 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zll9\" (UniqueName: \"kubernetes.io/projected/3c902c23-4d0e-451c-809d-d26e6ce797fe-kube-api-access-9zll9\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:03 crc kubenswrapper[5028]: I1123 09:00:03.609674 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c902c23-4d0e-451c-809d-d26e6ce797fe-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 09:00:04 crc kubenswrapper[5028]: I1123 09:00:04.094359 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd" event={"ID":"3c902c23-4d0e-451c-809d-d26e6ce797fe","Type":"ContainerDied","Data":"04ef74e87446180805f32a49096468d8c648f5e6bc8af87d8b2967215c67e43b"} Nov 23 09:00:04 crc kubenswrapper[5028]: I1123 09:00:04.094400 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ef74e87446180805f32a49096468d8c648f5e6bc8af87d8b2967215c67e43b" Nov 23 09:00:04 crc kubenswrapper[5028]: I1123 09:00:04.094421 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd" Nov 23 09:00:04 crc kubenswrapper[5028]: I1123 09:00:04.550738 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"] Nov 23 09:00:04 crc kubenswrapper[5028]: I1123 09:00:04.561804 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398095-mnp62"] Nov 23 09:00:05 crc kubenswrapper[5028]: I1123 09:00:05.067164 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6975b8a2-9360-4d2d-bee0-fc44b3896b87" path="/var/lib/kubelet/pods/6975b8a2-9360-4d2d-bee0-fc44b3896b87/volumes" Nov 23 09:00:31 crc kubenswrapper[5028]: I1123 09:00:31.225308 5028 scope.go:117] "RemoveContainer" containerID="13ab57c4be751a6e5cab4e1f4b7be17b9bd8f3aae2180a669fed061fb0a53d18" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.202911 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29398141-vpp8f"] Nov 23 09:01:00 crc kubenswrapper[5028]: E1123 09:01:00.204230 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c902c23-4d0e-451c-809d-d26e6ce797fe" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.204253 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c902c23-4d0e-451c-809d-d26e6ce797fe" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.204575 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c902c23-4d0e-451c-809d-d26e6ce797fe" containerName="collect-profiles" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.205771 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.222003 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398141-vpp8f"] Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.296300 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-combined-ca-bundle\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.296516 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-fernet-keys\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.296603 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncvsl\" (UniqueName: \"kubernetes.io/projected/acd0873c-30c3-44dc-a1d3-0d7837dac457-kube-api-access-ncvsl\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.296768 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-config-data\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.400423 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-combined-ca-bundle\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.401477 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-fernet-keys\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.401610 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncvsl\" (UniqueName: \"kubernetes.io/projected/acd0873c-30c3-44dc-a1d3-0d7837dac457-kube-api-access-ncvsl\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.401664 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-config-data\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.412756 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-fernet-keys\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.413891 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-config-data\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.414441 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-combined-ca-bundle\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.431156 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncvsl\" (UniqueName: \"kubernetes.io/projected/acd0873c-30c3-44dc-a1d3-0d7837dac457-kube-api-access-ncvsl\") pod \"keystone-cron-29398141-vpp8f\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:00 crc kubenswrapper[5028]: I1123 09:01:00.536986 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:01 crc kubenswrapper[5028]: I1123 09:01:01.027428 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398141-vpp8f"] Nov 23 09:01:01 crc kubenswrapper[5028]: I1123 09:01:01.824144 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vpp8f" event={"ID":"acd0873c-30c3-44dc-a1d3-0d7837dac457","Type":"ContainerStarted","Data":"9e4de5edd9d73ce845c5d74a0b422bc8b2fe859a2dd35a373680a164809b4527"} Nov 23 09:01:01 crc kubenswrapper[5028]: I1123 09:01:01.825057 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vpp8f" event={"ID":"acd0873c-30c3-44dc-a1d3-0d7837dac457","Type":"ContainerStarted","Data":"6345e4328440593032769f935be658beaed1d35343b4eaf95ae4bead07776c84"} Nov 23 09:01:01 crc kubenswrapper[5028]: I1123 09:01:01.862471 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29398141-vpp8f" podStartSLOduration=1.862445358 podStartE2EDuration="1.862445358s" podCreationTimestamp="2025-11-23 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:01:01.846361062 +0000 UTC m=+7845.543765881" watchObservedRunningTime="2025-11-23 09:01:01.862445358 +0000 UTC m=+7845.559850147" Nov 23 09:01:04 crc kubenswrapper[5028]: I1123 09:01:04.886255 5028 generic.go:334] "Generic (PLEG): container finished" podID="acd0873c-30c3-44dc-a1d3-0d7837dac457" containerID="9e4de5edd9d73ce845c5d74a0b422bc8b2fe859a2dd35a373680a164809b4527" exitCode=0 Nov 23 09:01:04 crc kubenswrapper[5028]: I1123 09:01:04.886355 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vpp8f" event={"ID":"acd0873c-30c3-44dc-a1d3-0d7837dac457","Type":"ContainerDied","Data":"9e4de5edd9d73ce845c5d74a0b422bc8b2fe859a2dd35a373680a164809b4527"} Nov 23 09:01:06 crc kubenswrapper[5028]: 
I1123 09:01:06.368345 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.460105 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-config-data\") pod \"acd0873c-30c3-44dc-a1d3-0d7837dac457\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.460185 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-fernet-keys\") pod \"acd0873c-30c3-44dc-a1d3-0d7837dac457\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.460420 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncvsl\" (UniqueName: \"kubernetes.io/projected/acd0873c-30c3-44dc-a1d3-0d7837dac457-kube-api-access-ncvsl\") pod \"acd0873c-30c3-44dc-a1d3-0d7837dac457\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.460612 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-combined-ca-bundle\") pod \"acd0873c-30c3-44dc-a1d3-0d7837dac457\" (UID: \"acd0873c-30c3-44dc-a1d3-0d7837dac457\") " Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.481688 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "acd0873c-30c3-44dc-a1d3-0d7837dac457" (UID: "acd0873c-30c3-44dc-a1d3-0d7837dac457"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.481790 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd0873c-30c3-44dc-a1d3-0d7837dac457-kube-api-access-ncvsl" (OuterVolumeSpecName: "kube-api-access-ncvsl") pod "acd0873c-30c3-44dc-a1d3-0d7837dac457" (UID: "acd0873c-30c3-44dc-a1d3-0d7837dac457"). InnerVolumeSpecName "kube-api-access-ncvsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.496297 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acd0873c-30c3-44dc-a1d3-0d7837dac457" (UID: "acd0873c-30c3-44dc-a1d3-0d7837dac457"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.528385 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-config-data" (OuterVolumeSpecName: "config-data") pod "acd0873c-30c3-44dc-a1d3-0d7837dac457" (UID: "acd0873c-30c3-44dc-a1d3-0d7837dac457"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.563628 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.563675 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.563688 5028 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/acd0873c-30c3-44dc-a1d3-0d7837dac457-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.563702 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncvsl\" (UniqueName: \"kubernetes.io/projected/acd0873c-30c3-44dc-a1d3-0d7837dac457-kube-api-access-ncvsl\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.914910 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398141-vpp8f" event={"ID":"acd0873c-30c3-44dc-a1d3-0d7837dac457","Type":"ContainerDied","Data":"6345e4328440593032769f935be658beaed1d35343b4eaf95ae4bead07776c84"} Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.915066 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6345e4328440593032769f935be658beaed1d35343b4eaf95ae4bead07776c84" Nov 23 09:01:06 crc kubenswrapper[5028]: I1123 09:01:06.915131 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398141-vpp8f" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.392367 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lq7t4"] Nov 23 09:01:32 crc kubenswrapper[5028]: E1123 09:01:32.394059 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd0873c-30c3-44dc-a1d3-0d7837dac457" containerName="keystone-cron" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.394084 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd0873c-30c3-44dc-a1d3-0d7837dac457" containerName="keystone-cron" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.394467 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd0873c-30c3-44dc-a1d3-0d7837dac457" containerName="keystone-cron" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.397436 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.402239 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq7t4"] Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.464068 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-catalog-content\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.464262 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-utilities\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.464440 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7vd\" (UniqueName: \"kubernetes.io/projected/e62d24e6-b636-47d6-aaf1-7f554066c0c5-kube-api-access-7t7vd\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.567419 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7vd\" (UniqueName: \"kubernetes.io/projected/e62d24e6-b636-47d6-aaf1-7f554066c0c5-kube-api-access-7t7vd\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.567558 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-catalog-content\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.567742 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-utilities\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.568319 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-catalog-content\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.568530 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-utilities\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.590877 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7t7vd\" (UniqueName: \"kubernetes.io/projected/e62d24e6-b636-47d6-aaf1-7f554066c0c5-kube-api-access-7t7vd\") pod \"certified-operators-lq7t4\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:32 crc kubenswrapper[5028]: I1123 09:01:32.765572 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:33 crc kubenswrapper[5028]: I1123 09:01:33.326414 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq7t4"] Nov 23 09:01:33 crc kubenswrapper[5028]: W1123 09:01:33.333471 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode62d24e6_b636_47d6_aaf1_7f554066c0c5.slice/crio-246deb479cb12cf79941acdad349422b53c5cb39b9b1bd9a077c7da766e4101d WatchSource:0}: Error finding container 246deb479cb12cf79941acdad349422b53c5cb39b9b1bd9a077c7da766e4101d: Status 404 returned error can't find the container with id 246deb479cb12cf79941acdad349422b53c5cb39b9b1bd9a077c7da766e4101d Nov 23 09:01:34 crc kubenswrapper[5028]: I1123 09:01:34.260315 5028 generic.go:334] "Generic (PLEG): container finished" podID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerID="f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe" exitCode=0 Nov 23 09:01:34 crc kubenswrapper[5028]: I1123 09:01:34.260379 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq7t4" event={"ID":"e62d24e6-b636-47d6-aaf1-7f554066c0c5","Type":"ContainerDied","Data":"f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe"} Nov 23 09:01:34 crc kubenswrapper[5028]: I1123 09:01:34.261228 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq7t4" event={"ID":"e62d24e6-b636-47d6-aaf1-7f554066c0c5","Type":"ContainerStarted","Data":"246deb479cb12cf79941acdad349422b53c5cb39b9b1bd9a077c7da766e4101d"} Nov 23 09:01:36 crc kubenswrapper[5028]: I1123 09:01:36.285529 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq7t4" event={"ID":"e62d24e6-b636-47d6-aaf1-7f554066c0c5","Type":"ContainerStarted","Data":"ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3"} Nov 23 09:01:38 crc kubenswrapper[5028]: I1123 09:01:38.331669 5028 generic.go:334] "Generic (PLEG): container finished" podID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerID="ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3" exitCode=0 Nov 23 09:01:38 crc kubenswrapper[5028]: I1123 09:01:38.331888 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq7t4" event={"ID":"e62d24e6-b636-47d6-aaf1-7f554066c0c5","Type":"ContainerDied","Data":"ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3"} Nov 23 09:01:38 crc kubenswrapper[5028]: I1123 09:01:38.336131 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:01:39 crc kubenswrapper[5028]: I1123 09:01:39.349109 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq7t4" event={"ID":"e62d24e6-b636-47d6-aaf1-7f554066c0c5","Type":"ContainerStarted","Data":"10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294"} Nov 23 09:01:39 crc kubenswrapper[5028]: I1123 
09:01:39.383817 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lq7t4" podStartSLOduration=2.859310261 podStartE2EDuration="7.383785796s" podCreationTimestamp="2025-11-23 09:01:32 +0000 UTC" firstStartedPulling="2025-11-23 09:01:34.263431305 +0000 UTC m=+7877.960836084" lastFinishedPulling="2025-11-23 09:01:38.78790684 +0000 UTC m=+7882.485311619" observedRunningTime="2025-11-23 09:01:39.374920226 +0000 UTC m=+7883.072325045" watchObservedRunningTime="2025-11-23 09:01:39.383785796 +0000 UTC m=+7883.081190575" Nov 23 09:01:42 crc kubenswrapper[5028]: I1123 09:01:42.766845 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:42 crc kubenswrapper[5028]: I1123 09:01:42.767665 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:42 crc kubenswrapper[5028]: I1123 09:01:42.817002 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:52 crc kubenswrapper[5028]: I1123 09:01:52.817773 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:52 crc kubenswrapper[5028]: I1123 09:01:52.868828 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq7t4"] Nov 23 09:01:53 crc kubenswrapper[5028]: I1123 09:01:53.506079 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lq7t4" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="registry-server" containerID="cri-o://10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294" gracePeriod=2 Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.059684 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.099908 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-catalog-content\") pod \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.100263 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-utilities\") pod \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.100347 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7vd\" (UniqueName: \"kubernetes.io/projected/e62d24e6-b636-47d6-aaf1-7f554066c0c5-kube-api-access-7t7vd\") pod \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\" (UID: \"e62d24e6-b636-47d6-aaf1-7f554066c0c5\") " Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.101155 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-utilities" (OuterVolumeSpecName: "utilities") pod "e62d24e6-b636-47d6-aaf1-7f554066c0c5" (UID: "e62d24e6-b636-47d6-aaf1-7f554066c0c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.129049 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62d24e6-b636-47d6-aaf1-7f554066c0c5-kube-api-access-7t7vd" (OuterVolumeSpecName: "kube-api-access-7t7vd") pod "e62d24e6-b636-47d6-aaf1-7f554066c0c5" (UID: "e62d24e6-b636-47d6-aaf1-7f554066c0c5"). InnerVolumeSpecName "kube-api-access-7t7vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.158259 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e62d24e6-b636-47d6-aaf1-7f554066c0c5" (UID: "e62d24e6-b636-47d6-aaf1-7f554066c0c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.202746 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.202782 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7vd\" (UniqueName: \"kubernetes.io/projected/e62d24e6-b636-47d6-aaf1-7f554066c0c5-kube-api-access-7t7vd\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.202795 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62d24e6-b636-47d6-aaf1-7f554066c0c5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.517862 5028 generic.go:334] "Generic (PLEG): container finished" podID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerID="10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294" exitCode=0 Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.517965 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq7t4" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.518054 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq7t4" event={"ID":"e62d24e6-b636-47d6-aaf1-7f554066c0c5","Type":"ContainerDied","Data":"10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294"} Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.518498 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq7t4" event={"ID":"e62d24e6-b636-47d6-aaf1-7f554066c0c5","Type":"ContainerDied","Data":"246deb479cb12cf79941acdad349422b53c5cb39b9b1bd9a077c7da766e4101d"} Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.518533 5028 scope.go:117] "RemoveContainer" containerID="10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.540367 5028 scope.go:117] "RemoveContainer" containerID="ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.555026 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq7t4"] Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.564842 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lq7t4"] Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.590693 5028 scope.go:117] "RemoveContainer" containerID="f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.620322 5028 scope.go:117] "RemoveContainer" containerID="10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294" Nov 23 09:01:54 crc kubenswrapper[5028]: E1123 09:01:54.620891 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294\": container with ID starting with 10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294 not found: ID does not exist" containerID="10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.620921 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294"} err="failed to get container status \"10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294\": rpc error: code = NotFound desc = could not find container \"10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294\": container with ID starting with 10c637c4b787025884eca606706ab235612a508ff3abdee3c812eb735d8c1294 not found: ID does not exist" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.620944 5028 scope.go:117] "RemoveContainer" containerID="ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3" Nov 23 09:01:54 crc kubenswrapper[5028]: E1123 09:01:54.621504 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3\": container with ID starting with ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3 not found: ID does not exist" containerID="ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.621528 5028 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3"} err="failed to get container status \"ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3\": rpc error: code = NotFound desc = could not find container \"ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3\": container with ID starting with ab8962b4dd3e24623c429a2c04e794eb1a9b2e6f885fbd5271046bdd44a912d3 not found: ID does not exist" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.621542 5028 scope.go:117] "RemoveContainer" containerID="f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe" Nov 23 09:01:54 crc kubenswrapper[5028]: E1123 09:01:54.621842 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe\": container with ID starting with f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe not found: ID does not exist" containerID="f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe" Nov 23 09:01:54 crc kubenswrapper[5028]: I1123 09:01:54.621864 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe"} err="failed to get container status \"f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe\": rpc error: code = NotFound desc = could not find container \"f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe\": container with ID starting with f0387fcafafac43de298d64cd1897c2f23d77f1a3d8104ed21e69dfacf079fbe not found: ID does not exist" Nov 23 09:01:55 crc kubenswrapper[5028]: I1123 09:01:55.076763 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" path="/var/lib/kubelet/pods/e62d24e6-b636-47d6-aaf1-7f554066c0c5/volumes" Nov 23 09:02:00 crc kubenswrapper[5028]: I1123 09:02:00.946607 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:02:00 crc kubenswrapper[5028]: I1123 09:02:00.947520 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:02:30 crc kubenswrapper[5028]: I1123 09:02:30.947060 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:02:30 crc kubenswrapper[5028]: I1123 09:02:30.947844 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:03:00 crc kubenswrapper[5028]: I1123 
09:03:00.946553 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:03:00 crc kubenswrapper[5028]: I1123 09:03:00.947743 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:03:00 crc kubenswrapper[5028]: I1123 09:03:00.947843 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 09:03:00 crc kubenswrapper[5028]: I1123 09:03:00.949628 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d25c2dc03a95af52955954eb895f6761916fc2acc6d21a4b1229867a82751002"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:03:00 crc kubenswrapper[5028]: I1123 09:03:00.949791 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://d25c2dc03a95af52955954eb895f6761916fc2acc6d21a4b1229867a82751002" gracePeriod=600 Nov 23 09:03:01 crc kubenswrapper[5028]: I1123 09:03:01.312527 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="d25c2dc03a95af52955954eb895f6761916fc2acc6d21a4b1229867a82751002" exitCode=0 Nov 23 09:03:01 crc kubenswrapper[5028]: I1123 09:03:01.312585 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"d25c2dc03a95af52955954eb895f6761916fc2acc6d21a4b1229867a82751002"} Nov 23 09:03:01 crc kubenswrapper[5028]: I1123 09:03:01.313123 5028 scope.go:117] "RemoveContainer" containerID="64b6fde4fcc0af47caaafe64144a1190e1913719d8e717ff61a12c084dfa81b5" Nov 23 09:03:02 crc kubenswrapper[5028]: I1123 09:03:02.339223 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"} Nov 23 09:03:28 crc kubenswrapper[5028]: I1123 09:03:28.063845 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-dwhsb"] Nov 23 09:03:28 crc kubenswrapper[5028]: I1123 09:03:28.084811 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-dwhsb"] Nov 23 09:03:28 crc kubenswrapper[5028]: I1123 09:03:28.093924 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-14aa-account-create-v6nm6"] Nov 23 09:03:28 crc kubenswrapper[5028]: I1123 09:03:28.101783 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-14aa-account-create-v6nm6"] Nov 23 09:03:29 crc kubenswrapper[5028]: I1123 09:03:29.071803 5028 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bffb19a-ff86-48f4-b489-ccd5611101db" path="/var/lib/kubelet/pods/1bffb19a-ff86-48f4-b489-ccd5611101db/volumes" Nov 23 09:03:29 crc kubenswrapper[5028]: I1123 09:03:29.073688 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0fc5999-6ccc-4626-a5a9-456473823758" path="/var/lib/kubelet/pods/a0fc5999-6ccc-4626-a5a9-456473823758/volumes" Nov 23 09:03:31 crc kubenswrapper[5028]: I1123 09:03:31.370206 5028 scope.go:117] "RemoveContainer" containerID="c76459c1cc667d8da41f31d5f89d71329c1dcf212a95aef3603f714f0c42f709" Nov 23 09:03:31 crc kubenswrapper[5028]: I1123 09:03:31.402136 5028 scope.go:117] "RemoveContainer" containerID="049fd97de01659f99a0fd81264d960ed3123eb359727cee79b871f7e5d3dc7d7" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.751427 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fxdhv"] Nov 23 09:03:37 crc kubenswrapper[5028]: E1123 09:03:37.753072 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="registry-server" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.753096 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="registry-server" Nov 23 09:03:37 crc kubenswrapper[5028]: E1123 09:03:37.753147 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="extract-content" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.753158 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="extract-content" Nov 23 09:03:37 crc kubenswrapper[5028]: E1123 09:03:37.753213 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="extract-utilities" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.753227 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="extract-utilities" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.753564 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62d24e6-b636-47d6-aaf1-7f554066c0c5" containerName="registry-server" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.756862 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.760534 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxdhv"] Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.834715 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-utilities\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.835074 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-catalog-content\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.835103 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm5s8\" (UniqueName: \"kubernetes.io/projected/ca504214-9a8d-403f-8874-fab92ed8b14b-kube-api-access-lm5s8\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.939011 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-utilities\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.939311 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-catalog-content\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.939392 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm5s8\" (UniqueName: \"kubernetes.io/projected/ca504214-9a8d-403f-8874-fab92ed8b14b-kube-api-access-lm5s8\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.939618 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-utilities\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.939856 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-catalog-content\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:37 crc kubenswrapper[5028]: I1123 09:03:37.966691 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lm5s8\" (UniqueName: \"kubernetes.io/projected/ca504214-9a8d-403f-8874-fab92ed8b14b-kube-api-access-lm5s8\") pod \"redhat-marketplace-fxdhv\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") " pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:38 crc kubenswrapper[5028]: I1123 09:03:38.089994 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxdhv" Nov 23 09:03:38 crc kubenswrapper[5028]: I1123 09:03:38.640181 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxdhv"] Nov 23 09:03:38 crc kubenswrapper[5028]: I1123 09:03:38.776300 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxdhv" event={"ID":"ca504214-9a8d-403f-8874-fab92ed8b14b","Type":"ContainerStarted","Data":"9a80938bde52534058e73f61a941ac8d227670df39046460e466d29052a9be32"} Nov 23 09:03:39 crc kubenswrapper[5028]: I1123 09:03:39.791062 5028 generic.go:334] "Generic (PLEG): container finished" podID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerID="9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4" exitCode=0 Nov 23 09:03:39 crc kubenswrapper[5028]: I1123 09:03:39.791137 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxdhv" event={"ID":"ca504214-9a8d-403f-8874-fab92ed8b14b","Type":"ContainerDied","Data":"9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4"} Nov 23 09:03:40 crc kubenswrapper[5028]: I1123 09:03:40.809444 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxdhv" event={"ID":"ca504214-9a8d-403f-8874-fab92ed8b14b","Type":"ContainerStarted","Data":"f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f"} Nov 23 09:03:41 crc kubenswrapper[5028]: I1123 09:03:41.828322 5028 generic.go:334] "Generic (PLEG): container finished" podID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerID="f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f" exitCode=0 Nov 23 09:03:41 crc kubenswrapper[5028]: I1123 09:03:41.828484 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxdhv" event={"ID":"ca504214-9a8d-403f-8874-fab92ed8b14b","Type":"ContainerDied","Data":"f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f"} Nov 23 09:03:42 crc kubenswrapper[5028]: I1123 09:03:42.846140 5028 generic.go:334] "Generic (PLEG): container finished" podID="1da50721-0bdc-4704-9a11-99c1b786a8bc" containerID="9edaa3e7b55f85ac4484ba78ca080c718055116e378150832fc4bfa976baa4df" exitCode=0 Nov 23 09:03:42 crc kubenswrapper[5028]: I1123 09:03:42.846207 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" event={"ID":"1da50721-0bdc-4704-9a11-99c1b786a8bc","Type":"ContainerDied","Data":"9edaa3e7b55f85ac4484ba78ca080c718055116e378150832fc4bfa976baa4df"} Nov 23 09:03:42 crc kubenswrapper[5028]: I1123 09:03:42.851632 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxdhv" event={"ID":"ca504214-9a8d-403f-8874-fab92ed8b14b","Type":"ContainerStarted","Data":"f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55"} Nov 23 09:03:42 crc kubenswrapper[5028]: I1123 09:03:42.927081 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fxdhv" 
podStartSLOduration=3.4714213369999998 podStartE2EDuration="5.927045473s" podCreationTimestamp="2025-11-23 09:03:37 +0000 UTC" firstStartedPulling="2025-11-23 09:03:39.793625608 +0000 UTC m=+8003.491030397" lastFinishedPulling="2025-11-23 09:03:42.249249714 +0000 UTC m=+8005.946654533" observedRunningTime="2025-11-23 09:03:42.903658103 +0000 UTC m=+8006.601062882" watchObservedRunningTime="2025-11-23 09:03:42.927045473 +0000 UTC m=+8006.624450292" Nov 23 09:03:43 crc kubenswrapper[5028]: I1123 09:03:43.085447 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-dcdb2"] Nov 23 09:03:43 crc kubenswrapper[5028]: I1123 09:03:43.085501 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-dcdb2"] Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.406982 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.517594 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-inventory\") pod \"1da50721-0bdc-4704-9a11-99c1b786a8bc\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.517776 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g745\" (UniqueName: \"kubernetes.io/projected/1da50721-0bdc-4704-9a11-99c1b786a8bc-kube-api-access-8g745\") pod \"1da50721-0bdc-4704-9a11-99c1b786a8bc\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.517822 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-ssh-key\") pod \"1da50721-0bdc-4704-9a11-99c1b786a8bc\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.517992 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-tripleo-cleanup-combined-ca-bundle\") pod \"1da50721-0bdc-4704-9a11-99c1b786a8bc\" (UID: \"1da50721-0bdc-4704-9a11-99c1b786a8bc\") " Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.525782 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da50721-0bdc-4704-9a11-99c1b786a8bc-kube-api-access-8g745" (OuterVolumeSpecName: "kube-api-access-8g745") pod "1da50721-0bdc-4704-9a11-99c1b786a8bc" (UID: "1da50721-0bdc-4704-9a11-99c1b786a8bc"). InnerVolumeSpecName "kube-api-access-8g745". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.533733 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "1da50721-0bdc-4704-9a11-99c1b786a8bc" (UID: "1da50721-0bdc-4704-9a11-99c1b786a8bc"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.552441 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1da50721-0bdc-4704-9a11-99c1b786a8bc" (UID: "1da50721-0bdc-4704-9a11-99c1b786a8bc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.552935 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-inventory" (OuterVolumeSpecName: "inventory") pod "1da50721-0bdc-4704-9a11-99c1b786a8bc" (UID: "1da50721-0bdc-4704-9a11-99c1b786a8bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.620844 5028 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.620886 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.620903 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g745\" (UniqueName: \"kubernetes.io/projected/1da50721-0bdc-4704-9a11-99c1b786a8bc-kube-api-access-8g745\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.620919 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1da50721-0bdc-4704-9a11-99c1b786a8bc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.877336 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4" event={"ID":"1da50721-0bdc-4704-9a11-99c1b786a8bc","Type":"ContainerDied","Data":"dc37e5559042f310e456224418add12df2f39fabc9788aef9eab866689117ba7"} Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.877403 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc37e5559042f310e456224418add12df2f39fabc9788aef9eab866689117ba7" Nov 23 09:03:44 crc kubenswrapper[5028]: I1123 09:03:44.877398 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4"
Nov 23 09:03:45 crc kubenswrapper[5028]: I1123 09:03:45.098259 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c15f0f-729e-4d85-9357-570386dd2486" path="/var/lib/kubelet/pods/38c15f0f-729e-4d85-9357-570386dd2486/volumes"
Nov 23 09:03:48 crc kubenswrapper[5028]: I1123 09:03:48.090603 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fxdhv"
Nov 23 09:03:48 crc kubenswrapper[5028]: I1123 09:03:48.091029 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fxdhv"
Nov 23 09:03:48 crc kubenswrapper[5028]: I1123 09:03:48.165843 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fxdhv"
Nov 23 09:03:49 crc kubenswrapper[5028]: I1123 09:03:49.025847 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fxdhv"
Nov 23 09:03:49 crc kubenswrapper[5028]: I1123 09:03:49.106417 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxdhv"]
Nov 23 09:03:50 crc kubenswrapper[5028]: I1123 09:03:50.944227 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fxdhv" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="registry-server" containerID="cri-o://f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55" gracePeriod=2
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.534109 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxdhv"
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.606808 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-utilities\") pod \"ca504214-9a8d-403f-8874-fab92ed8b14b\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") "
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.607209 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm5s8\" (UniqueName: \"kubernetes.io/projected/ca504214-9a8d-403f-8874-fab92ed8b14b-kube-api-access-lm5s8\") pod \"ca504214-9a8d-403f-8874-fab92ed8b14b\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") "
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.607395 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-catalog-content\") pod \"ca504214-9a8d-403f-8874-fab92ed8b14b\" (UID: \"ca504214-9a8d-403f-8874-fab92ed8b14b\") "
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.609557 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-utilities" (OuterVolumeSpecName: "utilities") pod "ca504214-9a8d-403f-8874-fab92ed8b14b" (UID: "ca504214-9a8d-403f-8874-fab92ed8b14b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.615674 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca504214-9a8d-403f-8874-fab92ed8b14b-kube-api-access-lm5s8" (OuterVolumeSpecName: "kube-api-access-lm5s8") pod "ca504214-9a8d-403f-8874-fab92ed8b14b" (UID: "ca504214-9a8d-403f-8874-fab92ed8b14b"). InnerVolumeSpecName "kube-api-access-lm5s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.630457 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca504214-9a8d-403f-8874-fab92ed8b14b" (UID: "ca504214-9a8d-403f-8874-fab92ed8b14b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.711017 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm5s8\" (UniqueName: \"kubernetes.io/projected/ca504214-9a8d-403f-8874-fab92ed8b14b-kube-api-access-lm5s8\") on node \"crc\" DevicePath \"\""
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.711278 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.711371 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca504214-9a8d-403f-8874-fab92ed8b14b-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.961317 5028 generic.go:334] "Generic (PLEG): container finished" podID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerID="f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55" exitCode=0
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.961403 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxdhv"
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.961406 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxdhv" event={"ID":"ca504214-9a8d-403f-8874-fab92ed8b14b","Type":"ContainerDied","Data":"f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55"}
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.961557 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxdhv" event={"ID":"ca504214-9a8d-403f-8874-fab92ed8b14b","Type":"ContainerDied","Data":"9a80938bde52534058e73f61a941ac8d227670df39046460e466d29052a9be32"}
Nov 23 09:03:51 crc kubenswrapper[5028]: I1123 09:03:51.961636 5028 scope.go:117] "RemoveContainer" containerID="f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.007213 5028 scope.go:117] "RemoveContainer" containerID="f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.018754 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxdhv"]
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.037403 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxdhv"]
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.041584 5028 scope.go:117] "RemoveContainer" containerID="9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.101142 5028 scope.go:117] "RemoveContainer" containerID="f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55"
Nov 23 09:03:52 crc kubenswrapper[5028]: E1123 09:03:52.102307 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55\": container with ID starting with f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55 not found: ID does not exist" containerID="f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.102355 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55"} err="failed to get container status \"f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55\": rpc error: code = NotFound desc = could not find container \"f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55\": container with ID starting with f88839a31a49412e9a356afb21bbe67c9f5b8168112e931f93e76b38ce419e55 not found: ID does not exist"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.102401 5028 scope.go:117] "RemoveContainer" containerID="f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f"
Nov 23 09:03:52 crc kubenswrapper[5028]: E1123 09:03:52.103053 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f\": container with ID starting with f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f not found: ID does not exist" containerID="f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.103101 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f"} err="failed to get container status \"f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f\": rpc error: code = NotFound desc = could not find container \"f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f\": container with ID starting with f6433f6f919923185fccba7d8e003c248280750480145a35780d423487b3925f not found: ID does not exist"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.103130 5028 scope.go:117] "RemoveContainer" containerID="9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4"
Nov 23 09:03:52 crc kubenswrapper[5028]: E1123 09:03:52.103473 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4\": container with ID starting with 9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4 not found: ID does not exist" containerID="9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4"
Nov 23 09:03:52 crc kubenswrapper[5028]: I1123 09:03:52.103501 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4"} err="failed to get container status \"9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4\": rpc error: code = NotFound desc = could not find container \"9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4\": container with ID starting with 9dedd6301f064127977a422ac3b6a7ebf2e774bf678708dc3c0c14a70229d9d4 not found: ID does not exist"
Nov 23 09:03:53 crc kubenswrapper[5028]: I1123 09:03:53.074490 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" path="/var/lib/kubelet/pods/ca504214-9a8d-403f-8874-fab92ed8b14b/volumes"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.359221 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mg4qd"]
Nov 23 09:04:19 crc kubenswrapper[5028]: E1123 09:04:19.360514 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="extract-utilities"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.360537 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="extract-utilities"
Nov 23 09:04:19 crc kubenswrapper[5028]: E1123 09:04:19.360558 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da50721-0bdc-4704-9a11-99c1b786a8bc" containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.360569 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da50721-0bdc-4704-9a11-99c1b786a8bc" containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker"
Nov 23 09:04:19 crc kubenswrapper[5028]: E1123 09:04:19.360587 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="registry-server"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.360595 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="registry-server"
Nov 23 09:04:19 crc kubenswrapper[5028]: E1123 09:04:19.360654 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="extract-content"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.360667 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="extract-content"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.360914 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da50721-0bdc-4704-9a11-99c1b786a8bc" containerName="tripleo-cleanup-tripleo-cleanup-openstack-networker"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.360931 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca504214-9a8d-403f-8874-fab92ed8b14b" containerName="registry-server"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.363180 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.372061 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mg4qd"]
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.451503 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qhkg\" (UniqueName: \"kubernetes.io/projected/b574e81f-2e72-40a1-b9c2-da0eb7078654-kube-api-access-4qhkg\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.451646 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-catalog-content\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.451772 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-utilities\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.553923 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-utilities\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.554112 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qhkg\" (UniqueName: \"kubernetes.io/projected/b574e81f-2e72-40a1-b9c2-da0eb7078654-kube-api-access-4qhkg\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.554243 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-catalog-content\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.554502 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-utilities\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.554601 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-catalog-content\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.582981 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qhkg\" (UniqueName: \"kubernetes.io/projected/b574e81f-2e72-40a1-b9c2-da0eb7078654-kube-api-access-4qhkg\") pod \"community-operators-mg4qd\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:19 crc kubenswrapper[5028]: I1123 09:04:19.688183 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mg4qd"
Nov 23 09:04:20 crc kubenswrapper[5028]: I1123 09:04:20.220114 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mg4qd"]
Nov 23 09:04:20 crc kubenswrapper[5028]: I1123 09:04:20.344637 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerStarted","Data":"56b7f614bb19d1ce56304d2ce7737b20100e50c3d435908f1798b497c981e20e"}
Nov 23 09:04:21 crc kubenswrapper[5028]: I1123 09:04:21.357351 5028 generic.go:334] "Generic (PLEG): container finished" podID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerID="1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37" exitCode=0
Nov 23 09:04:21 crc kubenswrapper[5028]: I1123 09:04:21.357484 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerDied","Data":"1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37"}
Nov 23 09:04:22 crc kubenswrapper[5028]: I1123 09:04:22.370438 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerStarted","Data":"6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135"}
Nov 23 09:04:24 crc kubenswrapper[5028]: I1123 09:04:24.394053 5028 generic.go:334] "Generic (PLEG): container finished" podID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerID="6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135" exitCode=0
Nov 23 09:04:24 crc kubenswrapper[5028]: I1123 09:04:24.394121 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerDied","Data":"6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135"}
Nov 23 09:04:25 crc kubenswrapper[5028]: I1123 09:04:25.405345 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerStarted","Data":"525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529"}
pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerStarted","Data":"525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529"} Nov 23 09:04:25 crc kubenswrapper[5028]: I1123 09:04:25.428052 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mg4qd" podStartSLOduration=2.924088856 podStartE2EDuration="6.428027316s" podCreationTimestamp="2025-11-23 09:04:19 +0000 UTC" firstStartedPulling="2025-11-23 09:04:21.359790423 +0000 UTC m=+8045.057195202" lastFinishedPulling="2025-11-23 09:04:24.863728883 +0000 UTC m=+8048.561133662" observedRunningTime="2025-11-23 09:04:25.42213864 +0000 UTC m=+8049.119543439" watchObservedRunningTime="2025-11-23 09:04:25.428027316 +0000 UTC m=+8049.125432095" Nov 23 09:04:29 crc kubenswrapper[5028]: I1123 09:04:29.688884 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mg4qd" Nov 23 09:04:29 crc kubenswrapper[5028]: I1123 09:04:29.692091 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mg4qd" Nov 23 09:04:29 crc kubenswrapper[5028]: I1123 09:04:29.748220 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mg4qd" Nov 23 09:04:30 crc kubenswrapper[5028]: I1123 09:04:30.540280 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mg4qd" Nov 23 09:04:30 crc kubenswrapper[5028]: I1123 09:04:30.600849 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mg4qd"] Nov 23 09:04:31 crc kubenswrapper[5028]: I1123 09:04:31.529541 5028 scope.go:117] "RemoveContainer" containerID="85ab6d2554b14739ddd50d078da884f4d100a1e70fab0322e53fd7a6b3da9d6e" Nov 23 09:04:32 crc kubenswrapper[5028]: I1123 09:04:32.508339 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mg4qd" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="registry-server" containerID="cri-o://525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529" gracePeriod=2 Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.078498 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mg4qd" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.228059 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-catalog-content\") pod \"b574e81f-2e72-40a1-b9c2-da0eb7078654\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.228191 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qhkg\" (UniqueName: \"kubernetes.io/projected/b574e81f-2e72-40a1-b9c2-da0eb7078654-kube-api-access-4qhkg\") pod \"b574e81f-2e72-40a1-b9c2-da0eb7078654\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.228336 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-utilities\") pod \"b574e81f-2e72-40a1-b9c2-da0eb7078654\" (UID: \"b574e81f-2e72-40a1-b9c2-da0eb7078654\") " Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.229850 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-utilities" (OuterVolumeSpecName: "utilities") pod "b574e81f-2e72-40a1-b9c2-da0eb7078654" (UID: "b574e81f-2e72-40a1-b9c2-da0eb7078654"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.235486 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b574e81f-2e72-40a1-b9c2-da0eb7078654-kube-api-access-4qhkg" (OuterVolumeSpecName: "kube-api-access-4qhkg") pod "b574e81f-2e72-40a1-b9c2-da0eb7078654" (UID: "b574e81f-2e72-40a1-b9c2-da0eb7078654"). InnerVolumeSpecName "kube-api-access-4qhkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.282110 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b574e81f-2e72-40a1-b9c2-da0eb7078654" (UID: "b574e81f-2e72-40a1-b9c2-da0eb7078654"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.331258 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.331536 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qhkg\" (UniqueName: \"kubernetes.io/projected/b574e81f-2e72-40a1-b9c2-da0eb7078654-kube-api-access-4qhkg\") on node \"crc\" DevicePath \"\"" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.331615 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b574e81f-2e72-40a1-b9c2-da0eb7078654-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.521516 5028 generic.go:334] "Generic (PLEG): container finished" podID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerID="525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529" exitCode=0 Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.521575 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mg4qd" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.521611 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerDied","Data":"525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529"} Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.522016 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mg4qd" event={"ID":"b574e81f-2e72-40a1-b9c2-da0eb7078654","Type":"ContainerDied","Data":"56b7f614bb19d1ce56304d2ce7737b20100e50c3d435908f1798b497c981e20e"} Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.522047 5028 scope.go:117] "RemoveContainer" containerID="525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.544145 5028 scope.go:117] "RemoveContainer" containerID="6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.560478 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mg4qd"] Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.569273 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mg4qd"] Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.580121 5028 scope.go:117] "RemoveContainer" containerID="1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.635425 5028 scope.go:117] "RemoveContainer" containerID="525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529" Nov 23 09:04:33 crc kubenswrapper[5028]: E1123 09:04:33.636148 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529\": container with ID starting with 525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529 not found: ID does not exist" containerID="525b5d9df4b13498869198a71194c6e18e6d922c9368a2a01d2c2266feaad529" Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.636213 
Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.636251 5028 scope.go:117] "RemoveContainer" containerID="6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135"
Nov 23 09:04:33 crc kubenswrapper[5028]: E1123 09:04:33.636702 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135\": container with ID starting with 6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135 not found: ID does not exist" containerID="6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135"
Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.636762 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135"} err="failed to get container status \"6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135\": rpc error: code = NotFound desc = could not find container \"6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135\": container with ID starting with 6ca2b44f7bfa8ae75ffd0f03f634cb1b28ec42b2a5e6f5530c95cddce4477135 not found: ID does not exist"
Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.636804 5028 scope.go:117] "RemoveContainer" containerID="1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37"
Nov 23 09:04:33 crc kubenswrapper[5028]: E1123 09:04:33.637301 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37\": container with ID starting with 1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37 not found: ID does not exist" containerID="1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37"
Nov 23 09:04:33 crc kubenswrapper[5028]: I1123 09:04:33.637331 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37"} err="failed to get container status \"1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37\": rpc error: code = NotFound desc = could not find container \"1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37\": container with ID starting with 1f6f45b594c97f88d71a774fd0c04e2e2bfc9560a74482290e0bff8c37272a37 not found: ID does not exist"
Nov 23 09:04:35 crc kubenswrapper[5028]: I1123 09:04:35.072933 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" path="/var/lib/kubelet/pods/b574e81f-2e72-40a1-b9c2-da0eb7078654/volumes"
Nov 23 09:05:30 crc kubenswrapper[5028]: I1123 09:05:30.946360 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:05:30 crc kubenswrapper[5028]: I1123 09:05:30.947319 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:05:57 crc kubenswrapper[5028]: I1123 09:05:57.087144 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-gz528"]
Nov 23 09:05:57 crc kubenswrapper[5028]: I1123 09:05:57.101667 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-gz528"]
Nov 23 09:05:58 crc kubenswrapper[5028]: I1123 09:05:58.040488 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-50af-account-create-gts6s"]
Nov 23 09:05:58 crc kubenswrapper[5028]: I1123 09:05:58.049010 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-50af-account-create-gts6s"]
Nov 23 09:05:59 crc kubenswrapper[5028]: I1123 09:05:59.075587 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="653c3e93-a5bd-4421-9e67-196a1bec03b4" path="/var/lib/kubelet/pods/653c3e93-a5bd-4421-9e67-196a1bec03b4/volumes"
Nov 23 09:05:59 crc kubenswrapper[5028]: I1123 09:05:59.076942 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19b1975-bb33-46c3-84d0-ad7540831dae" path="/var/lib/kubelet/pods/a19b1975-bb33-46c3-84d0-ad7540831dae/volumes"
Nov 23 09:06:00 crc kubenswrapper[5028]: I1123 09:06:00.947113 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:06:00 crc kubenswrapper[5028]: I1123 09:06:00.947729 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:06:09 crc kubenswrapper[5028]: I1123 09:06:09.042329 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-jz2dc"]
Nov 23 09:06:09 crc kubenswrapper[5028]: I1123 09:06:09.086990 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-jz2dc"]
Nov 23 09:06:11 crc kubenswrapper[5028]: I1123 09:06:11.067462 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d661c439-4a78-4492-b01d-d4f3bc755e8b" path="/var/lib/kubelet/pods/d661c439-4a78-4492-b01d-d4f3bc755e8b/volumes"
Nov 23 09:06:30 crc kubenswrapper[5028]: I1123 09:06:30.056505 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-9fc2-account-create-vz4kk"]
Nov 23 09:06:30 crc kubenswrapper[5028]: I1123 09:06:30.073358 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-9fc2-account-create-vz4kk"]
Nov 23 09:06:30 crc kubenswrapper[5028]: I1123 09:06:30.946495 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:06:30 crc kubenswrapper[5028]: I1123 09:06:30.946891 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:06:30 crc kubenswrapper[5028]: I1123 09:06:30.946994 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 09:06:30 crc kubenswrapper[5028]: I1123 09:06:30.948557 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 09:06:30 crc kubenswrapper[5028]: I1123 09:06:30.948677 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" gracePeriod=600
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.035818 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-hnsp5"]
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.044712 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-hnsp5"]
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.067170 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf27fcd2-e3a8-42ca-aab7-d990b6178677" path="/var/lib/kubelet/pods/cf27fcd2-e3a8-42ca-aab7-d990b6178677/volumes"
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.067932 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d38fb54a-51b5-4733-8437-2cc1398a5938" path="/var/lib/kubelet/pods/d38fb54a-51b5-4733-8437-2cc1398a5938/volumes"
Nov 23 09:06:31 crc kubenswrapper[5028]: E1123 09:06:31.083235 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.664801 5028 scope.go:117] "RemoveContainer" containerID="804e8a54a42640c946f206b04c16754adf4f349e8b224b1192a5c9d41ab9556a"
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.692025 5028 scope.go:117] "RemoveContainer" containerID="9b6c7d83e69ea31626205c72e1cfa2372063da0de7c38758ac01a1e163a882a9"
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.743546 5028 scope.go:117] "RemoveContainer" containerID="704089c37b0dfcf36a6eb88fc3f25c7dd4722b20e681f3c400b009c91a9015c0"
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.789066 5028 scope.go:117] "RemoveContainer" containerID="c36ff0185ef9c46e25ea1bd28530d655497cae111dbf09419cc01846e4abdf4c"
Nov 23 09:06:31 crc kubenswrapper[5028]: I1123 09:06:31.862014 5028 scope.go:117] "RemoveContainer" containerID="f2d2b0e17716864f5361cad16a81ff5f84c1e4914427cd4d0bb85040272cb66a"
Nov 23 09:06:32 crc kubenswrapper[5028]: I1123 09:06:32.060692 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" exitCode=0
Nov 23 09:06:32 crc kubenswrapper[5028]: I1123 09:06:32.060739 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"}
Nov 23 09:06:32 crc kubenswrapper[5028]: I1123 09:06:32.061201 5028 scope.go:117] "RemoveContainer" containerID="d25c2dc03a95af52955954eb895f6761916fc2acc6d21a4b1229867a82751002"
Nov 23 09:06:32 crc kubenswrapper[5028]: I1123 09:06:32.062047 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"
Nov 23 09:06:32 crc kubenswrapper[5028]: E1123 09:06:32.062482 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:06:44 crc kubenswrapper[5028]: I1123 09:06:44.054086 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"
Nov 23 09:06:44 crc kubenswrapper[5028]: E1123 09:06:44.055584 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:06:48 crc kubenswrapper[5028]: I1123 09:06:48.044654 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-lxqwl"]
Nov 23 09:06:48 crc kubenswrapper[5028]: I1123 09:06:48.054235 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-lxqwl"]
Nov 23 09:06:49 crc kubenswrapper[5028]: I1123 09:06:49.070685 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a7d9f4-c4f1-4be6-9092-b598185c1fda" path="/var/lib/kubelet/pods/27a7d9f4-c4f1-4be6-9092-b598185c1fda/volumes"
Nov 23 09:06:59 crc kubenswrapper[5028]: I1123 09:06:59.054502 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"
Nov 23 09:06:59 crc kubenswrapper[5028]: E1123 09:06:59.055662 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.824081 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sbpqf"]
Nov 23 09:07:00 crc kubenswrapper[5028]: E1123 09:07:00.825130 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="extract-content"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.825149 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="extract-content"
Nov 23 09:07:00 crc kubenswrapper[5028]: E1123 09:07:00.825167 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="extract-utilities"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.825176 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="extract-utilities"
Nov 23 09:07:00 crc kubenswrapper[5028]: E1123 09:07:00.825194 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="registry-server"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.825207 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="registry-server"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.825475 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="b574e81f-2e72-40a1-b9c2-da0eb7078654" containerName="registry-server"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.827585 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.864394 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbpqf"]
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.874569 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pxm7\" (UniqueName: \"kubernetes.io/projected/aa092194-3642-496a-8a42-502d4394e522-kube-api-access-6pxm7\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.874722 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-utilities\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.874762 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-catalog-content\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.977307 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pxm7\" (UniqueName: \"kubernetes.io/projected/aa092194-3642-496a-8a42-502d4394e522-kube-api-access-6pxm7\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.977453 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-utilities\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.977497 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-catalog-content\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.978128 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-utilities\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:00 crc kubenswrapper[5028]: I1123 09:07:00.978205 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-catalog-content\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:01 crc kubenswrapper[5028]: I1123 09:07:01.003003 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pxm7\" (UniqueName: \"kubernetes.io/projected/aa092194-3642-496a-8a42-502d4394e522-kube-api-access-6pxm7\") pod \"redhat-operators-sbpqf\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:01 crc kubenswrapper[5028]: I1123 09:07:01.171852 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:01 crc kubenswrapper[5028]: I1123 09:07:01.729293 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sbpqf"]
Nov 23 09:07:02 crc kubenswrapper[5028]: I1123 09:07:02.483157 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa092194-3642-496a-8a42-502d4394e522" containerID="49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f" exitCode=0
Nov 23 09:07:02 crc kubenswrapper[5028]: I1123 09:07:02.483641 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbpqf" event={"ID":"aa092194-3642-496a-8a42-502d4394e522","Type":"ContainerDied","Data":"49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f"}
Nov 23 09:07:02 crc kubenswrapper[5028]: I1123 09:07:02.483674 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbpqf" event={"ID":"aa092194-3642-496a-8a42-502d4394e522","Type":"ContainerStarted","Data":"57d5d12ded24302b9ca7d620bce92368da7956edb212a41b769f407a1fb494d2"}
Nov 23 09:07:02 crc kubenswrapper[5028]: I1123 09:07:02.486719 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 09:07:03 crc kubenswrapper[5028]: I1123 09:07:03.503302 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbpqf" event={"ID":"aa092194-3642-496a-8a42-502d4394e522","Type":"ContainerStarted","Data":"758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec"}
Nov 23 09:07:08 crc kubenswrapper[5028]: I1123 09:07:08.579082 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa092194-3642-496a-8a42-502d4394e522" containerID="758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec" exitCode=0
Nov 23 09:07:08 crc kubenswrapper[5028]: I1123 09:07:08.579175 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbpqf" event={"ID":"aa092194-3642-496a-8a42-502d4394e522","Type":"ContainerDied","Data":"758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec"}
Nov 23 09:07:09 crc kubenswrapper[5028]: I1123 09:07:09.592834 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbpqf" event={"ID":"aa092194-3642-496a-8a42-502d4394e522","Type":"ContainerStarted","Data":"a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb"}
Nov 23 09:07:09 crc kubenswrapper[5028]: I1123 09:07:09.627546 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sbpqf" podStartSLOduration=3.163305923 podStartE2EDuration="9.627522272s" podCreationTimestamp="2025-11-23 09:07:00 +0000 UTC" firstStartedPulling="2025-11-23 09:07:02.486484657 +0000 UTC m=+8206.183889436" lastFinishedPulling="2025-11-23 09:07:08.950701006 +0000 UTC m=+8212.648105785" observedRunningTime="2025-11-23 09:07:09.618379115 +0000 UTC m=+8213.315783894" watchObservedRunningTime="2025-11-23 09:07:09.627522272 +0000 UTC m=+8213.324927051"
Nov 23 09:07:11 crc kubenswrapper[5028]: I1123 09:07:11.174163 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:11 crc kubenswrapper[5028]: I1123 09:07:11.174677 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sbpqf"
Nov 23 09:07:11 crc kubenswrapper[5028]: I1123 09:07:11.614807 5028 generic.go:334] "Generic (PLEG): container finished" podID="c3203607-0919-4770-9464-326d5b95d8ad" containerID="9bc52ecbd6786e9e8dffab98fe74b05df5ae9bed08b1831d8af348a6eb88d615" exitCode=0
Nov 23 09:07:11 crc kubenswrapper[5028]: I1123 09:07:11.614879 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" event={"ID":"c3203607-0919-4770-9464-326d5b95d8ad","Type":"ContainerDied","Data":"9bc52ecbd6786e9e8dffab98fe74b05df5ae9bed08b1831d8af348a6eb88d615"}
Nov 23 09:07:12 crc kubenswrapper[5028]: I1123 09:07:12.053214 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"
Nov 23 09:07:12 crc kubenswrapper[5028]: E1123 09:07:12.053511 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:07:12 crc kubenswrapper[5028]: I1123 09:07:12.224549 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sbpqf" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="registry-server" probeResult="failure" output=<
Nov 23 09:07:12 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s
Nov 23 09:07:12 crc kubenswrapper[5028]: >
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.176073 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58"
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.307708 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-inventory\") pod \"c3203607-0919-4770-9464-326d5b95d8ad\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") "
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.307938 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ssh-key\") pod \"c3203607-0919-4770-9464-326d5b95d8ad\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") "
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.308131 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqk8d\" (UniqueName: \"kubernetes.io/projected/c3203607-0919-4770-9464-326d5b95d8ad-kube-api-access-bqk8d\") pod \"c3203607-0919-4770-9464-326d5b95d8ad\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") "
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.308187 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ceph\") pod \"c3203607-0919-4770-9464-326d5b95d8ad\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") "
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.308311 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-tripleo-cleanup-combined-ca-bundle\") pod \"c3203607-0919-4770-9464-326d5b95d8ad\" (UID: \"c3203607-0919-4770-9464-326d5b95d8ad\") "
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.345835 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "c3203607-0919-4770-9464-326d5b95d8ad" (UID: "c3203607-0919-4770-9464-326d5b95d8ad"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.345907 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3203607-0919-4770-9464-326d5b95d8ad-kube-api-access-bqk8d" (OuterVolumeSpecName: "kube-api-access-bqk8d") pod "c3203607-0919-4770-9464-326d5b95d8ad" (UID: "c3203607-0919-4770-9464-326d5b95d8ad"). InnerVolumeSpecName "kube-api-access-bqk8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.354972 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ceph" (OuterVolumeSpecName: "ceph") pod "c3203607-0919-4770-9464-326d5b95d8ad" (UID: "c3203607-0919-4770-9464-326d5b95d8ad"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.366832 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c3203607-0919-4770-9464-326d5b95d8ad" (UID: "c3203607-0919-4770-9464-326d5b95d8ad"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.377007 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-inventory" (OuterVolumeSpecName: "inventory") pod "c3203607-0919-4770-9464-326d5b95d8ad" (UID: "c3203607-0919-4770-9464-326d5b95d8ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.412073 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.412112 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.412124 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqk8d\" (UniqueName: \"kubernetes.io/projected/c3203607-0919-4770-9464-326d5b95d8ad-kube-api-access-bqk8d\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.412136 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.412148 5028 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3203607-0919-4770-9464-326d5b95d8ad-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.637366 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" event={"ID":"c3203607-0919-4770-9464-326d5b95d8ad","Type":"ContainerDied","Data":"5789270d3c46942d5eec58aeb6266334e649b3a2acad9a0e3d851ce9283fad51"} Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.637421 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5789270d3c46942d5eec58aeb6266334e649b3a2acad9a0e3d851ce9283fad51" Nov 23 09:07:13 crc kubenswrapper[5028]: I1123 09:07:13.637583 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58" Nov 23 09:07:22 crc kubenswrapper[5028]: I1123 09:07:22.230255 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sbpqf" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="registry-server" probeResult="failure" output=< Nov 23 09:07:22 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 09:07:22 crc kubenswrapper[5028]: > Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.220221 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-zrj2l"] Nov 23 09:07:23 crc kubenswrapper[5028]: E1123 09:07:23.221138 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3203607-0919-4770-9464-326d5b95d8ad" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.221169 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3203607-0919-4770-9464-326d5b95d8ad" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.221461 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3203607-0919-4770-9464-326d5b95d8ad" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.222768 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.226235 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.226559 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.228625 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.228802 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.231149 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-9vfht"] Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.233713 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.235531 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.239847 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.252218 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-zrj2l"] Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.267558 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-9vfht"] Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.364894 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365006 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365064 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-ssh-key\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365229 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365268 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsd9f\" (UniqueName: \"kubernetes.io/projected/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-kube-api-access-vsd9f\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365490 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ceph\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365672 5028 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-inventory\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365724 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svp8f\" (UniqueName: \"kubernetes.io/projected/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-kube-api-access-svp8f\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.365764 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-inventory\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.467779 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-ssh-key\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.467922 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.467969 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsd9f\" (UniqueName: \"kubernetes.io/projected/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-kube-api-access-vsd9f\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.468015 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ceph\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.468048 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-inventory\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.468066 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svp8f\" (UniqueName: \"kubernetes.io/projected/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-kube-api-access-svp8f\") pod 
\"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.468893 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-inventory\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.468985 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.469084 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.475766 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-inventory\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.476778 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-ssh-key\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.477201 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.477224 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.478853 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ceph\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.480300 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-inventory\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.486730 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.489718 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsd9f\" (UniqueName: \"kubernetes.io/projected/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-kube-api-access-vsd9f\") pod \"bootstrap-openstack-openstack-networker-zrj2l\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") " pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.500421 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svp8f\" (UniqueName: \"kubernetes.io/projected/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-kube-api-access-svp8f\") pod \"bootstrap-openstack-openstack-cell1-9vfht\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.548427 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" Nov 23 09:07:23 crc kubenswrapper[5028]: I1123 09:07:23.562726 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:07:24 crc kubenswrapper[5028]: I1123 09:07:24.054217 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:07:24 crc kubenswrapper[5028]: E1123 09:07:24.055446 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:07:24 crc kubenswrapper[5028]: I1123 09:07:24.157730 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-networker-zrj2l"] Nov 23 09:07:24 crc kubenswrapper[5028]: W1123 09:07:24.282979 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fa4a160_aa17_4390_aa9a_8f2fba7c9836.slice/crio-c1124f5f74986d2bee04e603db862fbc757c44209089c81dc997807d0ec821ac WatchSource:0}: Error finding container c1124f5f74986d2bee04e603db862fbc757c44209089c81dc997807d0ec821ac: Status 404 returned error can't find the container with id c1124f5f74986d2bee04e603db862fbc757c44209089c81dc997807d0ec821ac Nov 23 09:07:24 crc kubenswrapper[5028]: I1123 09:07:24.287217 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-9vfht"] Nov 23 09:07:24 crc kubenswrapper[5028]: I1123 09:07:24.759198 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" event={"ID":"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5","Type":"ContainerStarted","Data":"f9796eab223b4853c89407920c24ebe3ebc8484fde5288ee843389f780de8a67"} Nov 23 09:07:24 crc kubenswrapper[5028]: I1123 09:07:24.762382 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" event={"ID":"0fa4a160-aa17-4390-aa9a-8f2fba7c9836","Type":"ContainerStarted","Data":"c1124f5f74986d2bee04e603db862fbc757c44209089c81dc997807d0ec821ac"} Nov 23 09:07:25 crc kubenswrapper[5028]: I1123 09:07:25.775886 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" event={"ID":"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5","Type":"ContainerStarted","Data":"4e0c96106ab0ac3fb5cf6f7892c515a6e15957e3fd254d330f8de7019b88b0da"} Nov 23 09:07:25 crc kubenswrapper[5028]: I1123 09:07:25.779748 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" event={"ID":"0fa4a160-aa17-4390-aa9a-8f2fba7c9836","Type":"ContainerStarted","Data":"6119975e9d0599f39081345a10de3d75380e8dfd1730cf55ffdd4748a1fdee1c"} Nov 23 09:07:25 crc kubenswrapper[5028]: I1123 09:07:25.796743 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" podStartSLOduration=2.3258968380000002 podStartE2EDuration="2.796715371s" podCreationTimestamp="2025-11-23 09:07:23 +0000 UTC" firstStartedPulling="2025-11-23 09:07:24.175124951 +0000 UTC m=+8227.872529730" lastFinishedPulling="2025-11-23 09:07:24.645943484 +0000 UTC m=+8228.343348263" observedRunningTime="2025-11-23 09:07:25.79384551 +0000 UTC m=+8229.491250309" 
watchObservedRunningTime="2025-11-23 09:07:25.796715371 +0000 UTC m=+8229.494120160" Nov 23 09:07:25 crc kubenswrapper[5028]: I1123 09:07:25.820618 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" podStartSLOduration=2.378994695 podStartE2EDuration="2.820596303s" podCreationTimestamp="2025-11-23 09:07:23 +0000 UTC" firstStartedPulling="2025-11-23 09:07:24.288297129 +0000 UTC m=+8227.985701908" lastFinishedPulling="2025-11-23 09:07:24.729898737 +0000 UTC m=+8228.427303516" observedRunningTime="2025-11-23 09:07:25.813568359 +0000 UTC m=+8229.510973138" watchObservedRunningTime="2025-11-23 09:07:25.820596303 +0000 UTC m=+8229.518001082" Nov 23 09:07:31 crc kubenswrapper[5028]: I1123 09:07:31.227407 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sbpqf" Nov 23 09:07:31 crc kubenswrapper[5028]: I1123 09:07:31.299368 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sbpqf" Nov 23 09:07:32 crc kubenswrapper[5028]: I1123 09:07:32.030215 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbpqf"] Nov 23 09:07:32 crc kubenswrapper[5028]: I1123 09:07:32.037553 5028 scope.go:117] "RemoveContainer" containerID="757470b6a71b9b5c7a430871df2a3f986af2b675106b7504e1069c2b8ec03c34" Nov 23 09:07:32 crc kubenswrapper[5028]: I1123 09:07:32.856931 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sbpqf" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="registry-server" containerID="cri-o://a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb" gracePeriod=2 Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.369226 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sbpqf" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.519638 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-utilities\") pod \"aa092194-3642-496a-8a42-502d4394e522\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.519772 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pxm7\" (UniqueName: \"kubernetes.io/projected/aa092194-3642-496a-8a42-502d4394e522-kube-api-access-6pxm7\") pod \"aa092194-3642-496a-8a42-502d4394e522\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.519820 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-catalog-content\") pod \"aa092194-3642-496a-8a42-502d4394e522\" (UID: \"aa092194-3642-496a-8a42-502d4394e522\") " Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.520494 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-utilities" (OuterVolumeSpecName: "utilities") pod "aa092194-3642-496a-8a42-502d4394e522" (UID: "aa092194-3642-496a-8a42-502d4394e522"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.525400 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa092194-3642-496a-8a42-502d4394e522-kube-api-access-6pxm7" (OuterVolumeSpecName: "kube-api-access-6pxm7") pod "aa092194-3642-496a-8a42-502d4394e522" (UID: "aa092194-3642-496a-8a42-502d4394e522"). InnerVolumeSpecName "kube-api-access-6pxm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.610436 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa092194-3642-496a-8a42-502d4394e522" (UID: "aa092194-3642-496a-8a42-502d4394e522"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.622282 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.622322 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pxm7\" (UniqueName: \"kubernetes.io/projected/aa092194-3642-496a-8a42-502d4394e522-kube-api-access-6pxm7\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.622339 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa092194-3642-496a-8a42-502d4394e522-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.870515 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa092194-3642-496a-8a42-502d4394e522" containerID="a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb" exitCode=0 Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.870570 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbpqf" event={"ID":"aa092194-3642-496a-8a42-502d4394e522","Type":"ContainerDied","Data":"a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb"} Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.870602 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sbpqf" event={"ID":"aa092194-3642-496a-8a42-502d4394e522","Type":"ContainerDied","Data":"57d5d12ded24302b9ca7d620bce92368da7956edb212a41b769f407a1fb494d2"} Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.870630 5028 scope.go:117] "RemoveContainer" containerID="a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.870650 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sbpqf" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.892578 5028 scope.go:117] "RemoveContainer" containerID="758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.923196 5028 scope.go:117] "RemoveContainer" containerID="49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.929917 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sbpqf"] Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.939094 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sbpqf"] Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.979200 5028 scope.go:117] "RemoveContainer" containerID="a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb" Nov 23 09:07:33 crc kubenswrapper[5028]: E1123 09:07:33.979848 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb\": container with ID starting with a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb not found: ID does not exist" containerID="a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.980078 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb"} err="failed to get container status \"a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb\": rpc error: code = NotFound desc = could not find container \"a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb\": container with ID starting with a79b2d61a70342eda4af495254d274f64a0b6dbd97d71ef38b6542f19d7b9deb not found: ID does not exist" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.980144 5028 scope.go:117] "RemoveContainer" containerID="758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec" Nov 23 09:07:33 crc kubenswrapper[5028]: E1123 09:07:33.980820 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec\": container with ID starting with 758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec not found: ID does not exist" containerID="758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.980995 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec"} err="failed to get container status \"758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec\": rpc error: code = NotFound desc = could not find container \"758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec\": container with ID starting with 758db74417087311dd9006d1756de096179eae58a0b2f9ee1ca8710ca95bb5ec not found: ID does not exist" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.981145 5028 scope.go:117] "RemoveContainer" containerID="49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f" Nov 23 09:07:33 crc kubenswrapper[5028]: E1123 09:07:33.981721 5028 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f\": container with ID starting with 49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f not found: ID does not exist" containerID="49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f" Nov 23 09:07:33 crc kubenswrapper[5028]: I1123 09:07:33.981756 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f"} err="failed to get container status \"49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f\": rpc error: code = NotFound desc = could not find container \"49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f\": container with ID starting with 49d493545a9f516982e7d66ddfa890919b4bcf282373daca604b479d3b7dcd0f not found: ID does not exist" Nov 23 09:07:35 crc kubenswrapper[5028]: I1123 09:07:35.067666 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa092194-3642-496a-8a42-502d4394e522" path="/var/lib/kubelet/pods/aa092194-3642-496a-8a42-502d4394e522/volumes" Nov 23 09:07:37 crc kubenswrapper[5028]: I1123 09:07:37.061894 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:07:37 crc kubenswrapper[5028]: E1123 09:07:37.062214 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:07:52 crc kubenswrapper[5028]: I1123 09:07:52.053419 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:07:52 crc kubenswrapper[5028]: E1123 09:07:52.056358 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:08:03 crc kubenswrapper[5028]: I1123 09:08:03.053089 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:08:03 crc kubenswrapper[5028]: E1123 09:08:03.054001 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:08:18 crc kubenswrapper[5028]: I1123 09:08:18.055850 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:08:18 crc kubenswrapper[5028]: E1123 09:08:18.057033 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:08:29 crc kubenswrapper[5028]: I1123 09:08:29.055035 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:08:29 crc kubenswrapper[5028]: E1123 09:08:29.056631 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:08:43 crc kubenswrapper[5028]: I1123 09:08:43.055061 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:08:43 crc kubenswrapper[5028]: E1123 09:08:43.056383 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:08:55 crc kubenswrapper[5028]: I1123 09:08:55.054459 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:08:55 crc kubenswrapper[5028]: E1123 09:08:55.055714 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:09:07 crc kubenswrapper[5028]: I1123 09:09:07.063517 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:09:07 crc kubenswrapper[5028]: E1123 09:09:07.064804 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:09:22 crc kubenswrapper[5028]: I1123 09:09:22.053382 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:09:22 crc kubenswrapper[5028]: E1123 09:09:22.056500 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:09:36 crc kubenswrapper[5028]: I1123 09:09:36.052542 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:09:36 crc kubenswrapper[5028]: E1123 09:09:36.053583 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:09:49 crc kubenswrapper[5028]: I1123 09:09:49.054079 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:09:49 crc kubenswrapper[5028]: E1123 09:09:49.055305 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:10:04 crc kubenswrapper[5028]: I1123 09:10:04.053447 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:10:04 crc kubenswrapper[5028]: E1123 09:10:04.054659 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:10:18 crc kubenswrapper[5028]: I1123 09:10:18.053937 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:10:18 crc kubenswrapper[5028]: E1123 09:10:18.056741 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:10:32 crc kubenswrapper[5028]: I1123 09:10:32.053827 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:10:32 crc kubenswrapper[5028]: E1123 09:10:32.054902 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:10:45 crc kubenswrapper[5028]: I1123 09:10:45.056367 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:10:45 crc kubenswrapper[5028]: E1123 09:10:45.058412 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:10:56 crc kubenswrapper[5028]: I1123 09:10:56.053386 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:10:56 crc kubenswrapper[5028]: E1123 09:10:56.054558 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:11:10 crc kubenswrapper[5028]: I1123 09:11:10.053572 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:11:10 crc kubenswrapper[5028]: E1123 09:11:10.054697 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:11:25 crc kubenswrapper[5028]: I1123 09:11:25.053406 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705" Nov 23 09:11:25 crc kubenswrapper[5028]: E1123 09:11:25.054299 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:11:36 crc kubenswrapper[5028]: I1123 09:11:36.940488 5028 generic.go:334] "Generic (PLEG): container finished" podID="0fa4a160-aa17-4390-aa9a-8f2fba7c9836" containerID="6119975e9d0599f39081345a10de3d75380e8dfd1730cf55ffdd4748a1fdee1c" exitCode=0 Nov 23 09:11:36 crc kubenswrapper[5028]: I1123 09:11:36.940626 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" event={"ID":"0fa4a160-aa17-4390-aa9a-8f2fba7c9836","Type":"ContainerDied","Data":"6119975e9d0599f39081345a10de3d75380e8dfd1730cf55ffdd4748a1fdee1c"} Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.499154 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nvgjm"] Nov 23 09:11:38 crc kubenswrapper[5028]: E1123 09:11:38.501053 5028 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="extract-utilities" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.501072 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="extract-utilities" Nov 23 09:11:38 crc kubenswrapper[5028]: E1123 09:11:38.501101 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="extract-content" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.501111 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="extract-content" Nov 23 09:11:38 crc kubenswrapper[5028]: E1123 09:11:38.501148 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="registry-server" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.501156 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="registry-server" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.501414 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa092194-3642-496a-8a42-502d4394e522" containerName="registry-server" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.506174 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.511471 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvgjm"] Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.563155 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8bc\" (UniqueName: \"kubernetes.io/projected/cfcf2923-0199-4491-831c-7bf8cf456f87-kube-api-access-mh8bc\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.563416 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-catalog-content\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.563464 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-utilities\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.669133 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-catalog-content\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.669222 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-utilities\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.669332 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh8bc\" (UniqueName: \"kubernetes.io/projected/cfcf2923-0199-4491-831c-7bf8cf456f87-kube-api-access-mh8bc\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.670043 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-catalog-content\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.670129 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-utilities\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.712537 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh8bc\" (UniqueName: \"kubernetes.io/projected/cfcf2923-0199-4491-831c-7bf8cf456f87-kube-api-access-mh8bc\") pod \"certified-operators-nvgjm\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") " pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.831480 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvgjm" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.972183 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.979271 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht" event={"ID":"0fa4a160-aa17-4390-aa9a-8f2fba7c9836","Type":"ContainerDied","Data":"c1124f5f74986d2bee04e603db862fbc757c44209089c81dc997807d0ec821ac"} Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.979325 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1124f5f74986d2bee04e603db862fbc757c44209089c81dc997807d0ec821ac" Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.990037 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svp8f\" (UniqueName: \"kubernetes.io/projected/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-kube-api-access-svp8f\") pod \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.990453 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-bootstrap-combined-ca-bundle\") pod \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.990498 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ssh-key\") pod \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.990619 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ceph\") pod \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " Nov 23 09:11:38 crc kubenswrapper[5028]: I1123 09:11:38.990752 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-inventory\") pod \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\" (UID: \"0fa4a160-aa17-4390-aa9a-8f2fba7c9836\") " Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.013599 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ceph" (OuterVolumeSpecName: "ceph") pod "0fa4a160-aa17-4390-aa9a-8f2fba7c9836" (UID: "0fa4a160-aa17-4390-aa9a-8f2fba7c9836"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.017136 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-kube-api-access-svp8f" (OuterVolumeSpecName: "kube-api-access-svp8f") pod "0fa4a160-aa17-4390-aa9a-8f2fba7c9836" (UID: "0fa4a160-aa17-4390-aa9a-8f2fba7c9836"). InnerVolumeSpecName "kube-api-access-svp8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.018321 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0fa4a160-aa17-4390-aa9a-8f2fba7c9836" (UID: "0fa4a160-aa17-4390-aa9a-8f2fba7c9836"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.045045 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-inventory" (OuterVolumeSpecName: "inventory") pod "0fa4a160-aa17-4390-aa9a-8f2fba7c9836" (UID: "0fa4a160-aa17-4390-aa9a-8f2fba7c9836"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.052584 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0fa4a160-aa17-4390-aa9a-8f2fba7c9836" (UID: "0fa4a160-aa17-4390-aa9a-8f2fba7c9836"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.096161 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.096198 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.096212 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svp8f\" (UniqueName: \"kubernetes.io/projected/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-kube-api-access-svp8f\") on node \"crc\" DevicePath \"\"" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.096222 5028 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.096232 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fa4a160-aa17-4390-aa9a-8f2fba7c9836-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.442966 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nvgjm"] Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.470620 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-cm946"] Nov 23 09:11:39 crc kubenswrapper[5028]: E1123 09:11:39.471461 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa4a160-aa17-4390-aa9a-8f2fba7c9836" containerName="bootstrap-openstack-openstack-cell1" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.471489 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa4a160-aa17-4390-aa9a-8f2fba7c9836" containerName="bootstrap-openstack-openstack-cell1" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.471795 5028 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa4a160-aa17-4390-aa9a-8f2fba7c9836" containerName="bootstrap-openstack-openstack-cell1" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.472851 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.499960 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-cm946"] Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.512384 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ceph\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.513899 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-inventory\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.514249 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsljx\" (UniqueName: \"kubernetes.io/projected/c73c8666-ed55-4274-a8c4-de56ff21909e-kube-api-access-tsljx\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.514501 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ssh-key\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.617556 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ceph\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.617706 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-inventory\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.617806 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsljx\" (UniqueName: \"kubernetes.io/projected/c73c8666-ed55-4274-a8c4-de56ff21909e-kube-api-access-tsljx\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946" Nov 23 09:11:39 crc 
kubenswrapper[5028]: I1123 09:11:39.617960 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ssh-key\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.626546 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ceph\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.627801 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-inventory\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.627994 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ssh-key\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.639316 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsljx\" (UniqueName: \"kubernetes.io/projected/c73c8666-ed55-4274-a8c4-de56ff21909e-kube-api-access-tsljx\") pod \"download-cache-openstack-openstack-cell1-cm946\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") " pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.814435 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.996798 5028 generic.go:334] "Generic (PLEG): container finished" podID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerID="fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a" exitCode=0
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.996933 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvgjm" event={"ID":"cfcf2923-0199-4491-831c-7bf8cf456f87","Type":"ContainerDied","Data":"fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a"}
Nov 23 09:11:39 crc kubenswrapper[5028]: I1123 09:11:39.997373 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvgjm" event={"ID":"cfcf2923-0199-4491-831c-7bf8cf456f87","Type":"ContainerStarted","Data":"1531dfd6f4f97d3c474c7c0743049451526e3312846110732ed7d8452a9119d6"}
Nov 23 09:11:40 crc kubenswrapper[5028]: I1123 09:11:40.000191 5028 generic.go:334] "Generic (PLEG): container finished" podID="71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" containerID="4e0c96106ab0ac3fb5cf6f7892c515a6e15957e3fd254d330f8de7019b88b0da" exitCode=0
Nov 23 09:11:40 crc kubenswrapper[5028]: I1123 09:11:40.000312 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-9vfht"
Nov 23 09:11:40 crc kubenswrapper[5028]: I1123 09:11:40.000630 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" event={"ID":"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5","Type":"ContainerDied","Data":"4e0c96106ab0ac3fb5cf6f7892c515a6e15957e3fd254d330f8de7019b88b0da"}
Nov 23 09:11:40 crc kubenswrapper[5028]: I1123 09:11:40.054060 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"
Nov 23 09:11:40 crc kubenswrapper[5028]: I1123 09:11:40.391193 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-cm946"]
Nov 23 09:11:40 crc kubenswrapper[5028]: W1123 09:11:40.403094 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc73c8666_ed55_4274_a8c4_de56ff21909e.slice/crio-f1bccb4138d0763a970d6d949628026fbb6f6adbb201ee43afdbd36f282bad58 WatchSource:0}: Error finding container f1bccb4138d0763a970d6d949628026fbb6f6adbb201ee43afdbd36f282bad58: Status 404 returned error can't find the container with id f1bccb4138d0763a970d6d949628026fbb6f6adbb201ee43afdbd36f282bad58
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.022817 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"7fc33854a4e7d73aa92d206882e4ff95709d2d29698b9f2a99337a7d861512b5"}
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.028517 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvgjm" event={"ID":"cfcf2923-0199-4491-831c-7bf8cf456f87","Type":"ContainerStarted","Data":"f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38"}
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.031334 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-cm946" event={"ID":"c73c8666-ed55-4274-a8c4-de56ff21909e","Type":"ContainerStarted","Data":"f1bccb4138d0763a970d6d949628026fbb6f6adbb201ee43afdbd36f282bad58"}
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.521543 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l"
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.573386 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsd9f\" (UniqueName: \"kubernetes.io/projected/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-kube-api-access-vsd9f\") pod \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") "
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.573623 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-bootstrap-combined-ca-bundle\") pod \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") "
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.573721 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-inventory\") pod \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") "
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.573821 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-ssh-key\") pod \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\" (UID: \"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5\") "
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.581819 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-kube-api-access-vsd9f" (OuterVolumeSpecName: "kube-api-access-vsd9f") pod "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" (UID: "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5"). InnerVolumeSpecName "kube-api-access-vsd9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.583505 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" (UID: "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.609513 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" (UID: "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.611720 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-inventory" (OuterVolumeSpecName: "inventory") pod "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" (UID: "71b95d4c-b5f3-457d-bb73-c63c1d9f04f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.678367 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsd9f\" (UniqueName: \"kubernetes.io/projected/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-kube-api-access-vsd9f\") on node \"crc\" DevicePath \"\""
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.678915 5028 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.678927 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 09:11:41 crc kubenswrapper[5028]: I1123 09:11:41.678940 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b95d4c-b5f3-457d-bb73-c63c1d9f04f5-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.047706 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.048928 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-networker-zrj2l" event={"ID":"71b95d4c-b5f3-457d-bb73-c63c1d9f04f5","Type":"ContainerDied","Data":"f9796eab223b4853c89407920c24ebe3ebc8484fde5288ee843389f780de8a67"}
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.049154 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9796eab223b4853c89407920c24ebe3ebc8484fde5288ee843389f780de8a67"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.049999 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-cm946" event={"ID":"c73c8666-ed55-4274-a8c4-de56ff21909e","Type":"ContainerStarted","Data":"7bd6381d3e6dfbbdf609c04f5053b84cd19ff981a842a80f50603aa2d96b9697"}
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.095773 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-cm946" podStartSLOduration=2.63559315 podStartE2EDuration="3.095734719s" podCreationTimestamp="2025-11-23 09:11:39 +0000 UTC" firstStartedPulling="2025-11-23 09:11:40.407145246 +0000 UTC m=+8484.104550035" lastFinishedPulling="2025-11-23 09:11:40.867286825 +0000 UTC m=+8484.564691604" observedRunningTime="2025-11-23 09:11:42.069674092 +0000 UTC m=+8485.767078871" watchObservedRunningTime="2025-11-23 09:11:42.095734719 +0000 UTC m=+8485.793139498"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.140383 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-networker-pfwrm"]
Nov 23 09:11:42 crc kubenswrapper[5028]: E1123 09:11:42.141000 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" containerName="bootstrap-openstack-openstack-networker"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.141030 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" containerName="bootstrap-openstack-openstack-networker"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.141340 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b95d4c-b5f3-457d-bb73-c63c1d9f04f5" containerName="bootstrap-openstack-openstack-networker"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.142288 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.144768 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.144856 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.157306 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-networker-pfwrm"]
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.204858 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbfh\" (UniqueName: \"kubernetes.io/projected/d7743159-c6ec-414e-8bd2-523769405308-kube-api-access-lvbfh\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.205006 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-ssh-key\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.205064 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-inventory\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.307858 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-ssh-key\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.307978 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-inventory\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.308102 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbfh\" (UniqueName: \"kubernetes.io/projected/d7743159-c6ec-414e-8bd2-523769405308-kube-api-access-lvbfh\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.315225 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-ssh-key\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.315539 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-inventory\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.330485 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbfh\" (UniqueName: \"kubernetes.io/projected/d7743159-c6ec-414e-8bd2-523769405308-kube-api-access-lvbfh\") pod \"download-cache-openstack-openstack-networker-pfwrm\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") " pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:42 crc kubenswrapper[5028]: I1123 09:11:42.532693 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:11:43 crc kubenswrapper[5028]: I1123 09:11:43.067482 5028 generic.go:334] "Generic (PLEG): container finished" podID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerID="f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38" exitCode=0
Nov 23 09:11:43 crc kubenswrapper[5028]: I1123 09:11:43.091958 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvgjm" event={"ID":"cfcf2923-0199-4491-831c-7bf8cf456f87","Type":"ContainerDied","Data":"f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38"}
Nov 23 09:11:43 crc kubenswrapper[5028]: I1123 09:11:43.175557 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-networker-pfwrm"]
Nov 23 09:11:44 crc kubenswrapper[5028]: I1123 09:11:44.082379 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvgjm" event={"ID":"cfcf2923-0199-4491-831c-7bf8cf456f87","Type":"ContainerStarted","Data":"d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af"}
Nov 23 09:11:44 crc kubenswrapper[5028]: I1123 09:11:44.088168 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-pfwrm" event={"ID":"d7743159-c6ec-414e-8bd2-523769405308","Type":"ContainerStarted","Data":"c753f004212bf831c6ac16246ae3affbc3665d5c119bf98b34971e3076e3639e"}
Nov 23 09:11:44 crc kubenswrapper[5028]: I1123 09:11:44.088208 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-pfwrm" event={"ID":"d7743159-c6ec-414e-8bd2-523769405308","Type":"ContainerStarted","Data":"cd5fb697933d500ded065b4c617422461b61c06c6adf9ebd81ddc2fa30b78ce1"}
Nov 23 09:11:44 crc kubenswrapper[5028]: I1123 09:11:44.108453 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nvgjm" podStartSLOduration=2.584026476 podStartE2EDuration="6.108428234s" podCreationTimestamp="2025-11-23 09:11:38 +0000 UTC" firstStartedPulling="2025-11-23 09:11:39.999956063 +0000 UTC m=+8483.697360842" lastFinishedPulling="2025-11-23 09:11:43.524357781 +0000 UTC m=+8487.221762600" observedRunningTime="2025-11-23 09:11:44.104419944 +0000 UTC m=+8487.801824743" watchObservedRunningTime="2025-11-23 09:11:44.108428234 +0000 UTC m=+8487.805833033"
Nov 23 09:11:44 crc kubenswrapper[5028]: I1123 09:11:44.131160 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-networker-pfwrm" podStartSLOduration=1.696566483 podStartE2EDuration="2.131136757s" podCreationTimestamp="2025-11-23 09:11:42 +0000 UTC" firstStartedPulling="2025-11-23 09:11:43.18664655 +0000 UTC m=+8486.884051329" lastFinishedPulling="2025-11-23 09:11:43.621216824 +0000 UTC m=+8487.318621603" observedRunningTime="2025-11-23 09:11:44.122480102 +0000 UTC m=+8487.819884881" watchObservedRunningTime="2025-11-23 09:11:44.131136757 +0000 UTC m=+8487.828541536"
Nov 23 09:11:48 crc kubenswrapper[5028]: I1123 09:11:48.833162 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nvgjm"
Nov 23 09:11:48 crc kubenswrapper[5028]: I1123 09:11:48.834006 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nvgjm"
Nov 23 09:11:48 crc kubenswrapper[5028]: I1123 09:11:48.903462 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nvgjm"
Nov 23 09:11:49 crc kubenswrapper[5028]: I1123 09:11:49.193876 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nvgjm"
Nov 23 09:11:49 crc kubenswrapper[5028]: I1123 09:11:49.276809 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvgjm"]
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.163040 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nvgjm" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="registry-server" containerID="cri-o://d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af" gracePeriod=2
Nov 23 09:11:51 crc kubenswrapper[5028]: E1123 09:11:51.420519 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfcf2923_0199_4491_831c_7bf8cf456f87.slice/crio-conmon-d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.778701 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvgjm"
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.858747 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh8bc\" (UniqueName: \"kubernetes.io/projected/cfcf2923-0199-4491-831c-7bf8cf456f87-kube-api-access-mh8bc\") pod \"cfcf2923-0199-4491-831c-7bf8cf456f87\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") "
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.858968 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-catalog-content\") pod \"cfcf2923-0199-4491-831c-7bf8cf456f87\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") "
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.859044 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-utilities\") pod \"cfcf2923-0199-4491-831c-7bf8cf456f87\" (UID: \"cfcf2923-0199-4491-831c-7bf8cf456f87\") "
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.859683 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-utilities" (OuterVolumeSpecName: "utilities") pod "cfcf2923-0199-4491-831c-7bf8cf456f87" (UID: "cfcf2923-0199-4491-831c-7bf8cf456f87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.874357 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfcf2923-0199-4491-831c-7bf8cf456f87-kube-api-access-mh8bc" (OuterVolumeSpecName: "kube-api-access-mh8bc") pod "cfcf2923-0199-4491-831c-7bf8cf456f87" (UID: "cfcf2923-0199-4491-831c-7bf8cf456f87"). InnerVolumeSpecName "kube-api-access-mh8bc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.899424 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfcf2923-0199-4491-831c-7bf8cf456f87" (UID: "cfcf2923-0199-4491-831c-7bf8cf456f87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.960675 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.960707 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfcf2923-0199-4491-831c-7bf8cf456f87-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 09:11:51 crc kubenswrapper[5028]: I1123 09:11:51.960720 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh8bc\" (UniqueName: \"kubernetes.io/projected/cfcf2923-0199-4491-831c-7bf8cf456f87-kube-api-access-mh8bc\") on node \"crc\" DevicePath \"\""
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.180683 5028 generic.go:334] "Generic (PLEG): container finished" podID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerID="d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af" exitCode=0
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.180771 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvgjm" event={"ID":"cfcf2923-0199-4491-831c-7bf8cf456f87","Type":"ContainerDied","Data":"d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af"}
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.180844 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nvgjm" event={"ID":"cfcf2923-0199-4491-831c-7bf8cf456f87","Type":"ContainerDied","Data":"1531dfd6f4f97d3c474c7c0743049451526e3312846110732ed7d8452a9119d6"}
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.180875 5028 scope.go:117] "RemoveContainer" containerID="d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.180779 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nvgjm"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.223153 5028 scope.go:117] "RemoveContainer" containerID="f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.229641 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nvgjm"]
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.243333 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nvgjm"]
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.263307 5028 scope.go:117] "RemoveContainer" containerID="fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.303152 5028 scope.go:117] "RemoveContainer" containerID="d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af"
Nov 23 09:11:52 crc kubenswrapper[5028]: E1123 09:11:52.303773 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af\": container with ID starting with d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af not found: ID does not exist" containerID="d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.303808 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af"} err="failed to get container status \"d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af\": rpc error: code = NotFound desc = could not find container \"d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af\": container with ID starting with d923031efcada6bfc29cd26fbee6d3eeff00a3eea1f776e855459892534ea2af not found: ID does not exist"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.303832 5028 scope.go:117] "RemoveContainer" containerID="f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38"
Nov 23 09:11:52 crc kubenswrapper[5028]: E1123 09:11:52.304236 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38\": container with ID starting with f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38 not found: ID does not exist" containerID="f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.304279 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38"} err="failed to get container status \"f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38\": rpc error: code = NotFound desc = could not find container \"f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38\": container with ID starting with f9e6a1f6f56aadc5d6e364031945f84f1c9642d57cab97ffc513e2312f478f38 not found: ID does not exist"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.304310 5028 scope.go:117] "RemoveContainer" containerID="fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a"
Nov 23 09:11:52 crc kubenswrapper[5028]: E1123 09:11:52.304709 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a\": container with ID starting with fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a not found: ID does not exist" containerID="fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a"
Nov 23 09:11:52 crc kubenswrapper[5028]: I1123 09:11:52.304753 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a"} err="failed to get container status \"fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a\": rpc error: code = NotFound desc = could not find container \"fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a\": container with ID starting with fd8e62496682497c2dee4d07eb412b921bf31862f49751bc823cd37565dc412a not found: ID does not exist"
Nov 23 09:11:53 crc kubenswrapper[5028]: I1123 09:11:53.079250 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" path="/var/lib/kubelet/pods/cfcf2923-0199-4491-831c-7bf8cf456f87/volumes"
Nov 23 09:12:51 crc kubenswrapper[5028]: I1123 09:12:51.894182 5028 generic.go:334] "Generic (PLEG): container finished" podID="d7743159-c6ec-414e-8bd2-523769405308" containerID="c753f004212bf831c6ac16246ae3affbc3665d5c119bf98b34971e3076e3639e" exitCode=0
Nov 23 09:12:51 crc kubenswrapper[5028]: I1123 09:12:51.894332 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-pfwrm" event={"ID":"d7743159-c6ec-414e-8bd2-523769405308","Type":"ContainerDied","Data":"c753f004212bf831c6ac16246ae3affbc3665d5c119bf98b34971e3076e3639e"}
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.516878 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.589261 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-ssh-key\") pod \"d7743159-c6ec-414e-8bd2-523769405308\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") "
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.589415 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvbfh\" (UniqueName: \"kubernetes.io/projected/d7743159-c6ec-414e-8bd2-523769405308-kube-api-access-lvbfh\") pod \"d7743159-c6ec-414e-8bd2-523769405308\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") "
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.589709 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-inventory\") pod \"d7743159-c6ec-414e-8bd2-523769405308\" (UID: \"d7743159-c6ec-414e-8bd2-523769405308\") "
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.598357 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7743159-c6ec-414e-8bd2-523769405308-kube-api-access-lvbfh" (OuterVolumeSpecName: "kube-api-access-lvbfh") pod "d7743159-c6ec-414e-8bd2-523769405308" (UID: "d7743159-c6ec-414e-8bd2-523769405308"). InnerVolumeSpecName "kube-api-access-lvbfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.626803 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-inventory" (OuterVolumeSpecName: "inventory") pod "d7743159-c6ec-414e-8bd2-523769405308" (UID: "d7743159-c6ec-414e-8bd2-523769405308"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.630457 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d7743159-c6ec-414e-8bd2-523769405308" (UID: "d7743159-c6ec-414e-8bd2-523769405308"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.693649 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.693701 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7743159-c6ec-414e-8bd2-523769405308-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.693717 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvbfh\" (UniqueName: \"kubernetes.io/projected/d7743159-c6ec-414e-8bd2-523769405308-kube-api-access-lvbfh\") on node \"crc\" DevicePath \"\""
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.932337 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-networker-pfwrm" event={"ID":"d7743159-c6ec-414e-8bd2-523769405308","Type":"ContainerDied","Data":"cd5fb697933d500ded065b4c617422461b61c06c6adf9ebd81ddc2fa30b78ce1"}
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.932879 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5fb697933d500ded065b4c617422461b61c06c6adf9ebd81ddc2fa30b78ce1"
Nov 23 09:12:53 crc kubenswrapper[5028]: I1123 09:12:53.932417 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-networker-pfwrm"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.054123 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-networker-fzgtv"]
Nov 23 09:12:54 crc kubenswrapper[5028]: E1123 09:12:54.055342 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="extract-content"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.055371 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="extract-content"
Nov 23 09:12:54 crc kubenswrapper[5028]: E1123 09:12:54.055400 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="extract-utilities"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.055411 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="extract-utilities"
Nov 23 09:12:54 crc kubenswrapper[5028]: E1123 09:12:54.055448 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="registry-server"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.055458 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="registry-server"
Nov 23 09:12:54 crc kubenswrapper[5028]: E1123 09:12:54.055508 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7743159-c6ec-414e-8bd2-523769405308" containerName="download-cache-openstack-openstack-networker"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.055524 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7743159-c6ec-414e-8bd2-523769405308" containerName="download-cache-openstack-openstack-networker"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.057879 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfcf2923-0199-4491-831c-7bf8cf456f87" containerName="registry-server"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.057906 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7743159-c6ec-414e-8bd2-523769405308" containerName="download-cache-openstack-openstack-networker"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.063585 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.106466 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-networker-fzgtv"]
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.106821 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.107477 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.206777 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-inventory\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.207277 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-ssh-key\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.207461 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whvdp\" (UniqueName: \"kubernetes.io/projected/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-kube-api-access-whvdp\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.310042 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whvdp\" (UniqueName: \"kubernetes.io/projected/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-kube-api-access-whvdp\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.310268 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-inventory\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.310308 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-ssh-key\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.317771 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-ssh-key\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.318411 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-inventory\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.333266 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whvdp\" (UniqueName: \"kubernetes.io/projected/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-kube-api-access-whvdp\") pod \"configure-network-openstack-openstack-networker-fzgtv\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:54 crc kubenswrapper[5028]: I1123 09:12:54.435850 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-fzgtv"
Nov 23 09:12:55 crc kubenswrapper[5028]: I1123 09:12:55.044224 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-networker-fzgtv"]
Nov 23 09:12:55 crc kubenswrapper[5028]: I1123 09:12:55.073404 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 23 09:12:55 crc kubenswrapper[5028]: I1123 09:12:55.959574 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-fzgtv" event={"ID":"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0","Type":"ContainerStarted","Data":"3ce88a7889ca66beef1d6b39f4f1803259196671765a1b7091cbbe04e88e68e0"}
Nov 23 09:12:55 crc kubenswrapper[5028]: I1123 09:12:55.960409 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-fzgtv" event={"ID":"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0","Type":"ContainerStarted","Data":"917e539f6907f68004332585b7e57a24165cb9337a6c93564794194650feb19d"}
Nov 23 09:12:55 crc kubenswrapper[5028]: I1123 09:12:55.987729 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-networker-fzgtv" podStartSLOduration=1.464440915 podStartE2EDuration="1.987707269s" podCreationTimestamp="2025-11-23 09:12:54 +0000 UTC" firstStartedPulling="2025-11-23 09:12:55.073087683 +0000 UTC m=+8558.770492462" lastFinishedPulling="2025-11-23 09:12:55.596353997 +0000 UTC m=+8559.293758816" observedRunningTime="2025-11-23 09:12:55.982575441 +0000 UTC m=+8559.679980230" watchObservedRunningTime="2025-11-23 09:12:55.987707269 +0000 UTC m=+8559.685112048"
Nov 23 09:13:39 crc kubenswrapper[5028]: I1123 09:13:39.777344 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" podUID="3f4726b9-823e-4abf-b301-6c020b882874" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:13:39 crc kubenswrapper[5028]: I1123 09:13:39.779006 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd445pzzk" podUID="3f4726b9-823e-4abf-b301-6c020b882874" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.004720 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4sct6"]
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.007355 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.027602 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sct6"]
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.140093 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-catalog-content\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.140225 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-utilities\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.141688 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6mn\" (UniqueName: \"kubernetes.io/projected/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-kube-api-access-4v6mn\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.244537 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6mn\" (UniqueName: \"kubernetes.io/projected/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-kube-api-access-4v6mn\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.244803 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-catalog-content\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.244913 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-utilities\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.245474 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-catalog-content\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.245780 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-utilities\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.268546 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6mn\" (UniqueName: \"kubernetes.io/projected/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-kube-api-access-4v6mn\") pod \"redhat-marketplace-4sct6\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") " pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.343715 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:40 crc kubenswrapper[5028]: I1123 09:13:40.855462 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sct6"]
Nov 23 09:13:41 crc kubenswrapper[5028]: I1123 09:13:41.522116 5028 generic.go:334] "Generic (PLEG): container finished" podID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerID="5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444" exitCode=0
Nov 23 09:13:41 crc kubenswrapper[5028]: I1123 09:13:41.522249 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sct6" event={"ID":"18f6f5cd-3d90-4d20-a68e-fd8c53578e07","Type":"ContainerDied","Data":"5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444"}
Nov 23 09:13:41 crc kubenswrapper[5028]: I1123 09:13:41.522739 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sct6" event={"ID":"18f6f5cd-3d90-4d20-a68e-fd8c53578e07","Type":"ContainerStarted","Data":"8c8d64c13801549053f9dd4ba536de40973d6655b77973d68d72f429e5185711"}
Nov 23 09:13:42 crc kubenswrapper[5028]: I1123 09:13:42.535353 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sct6" event={"ID":"18f6f5cd-3d90-4d20-a68e-fd8c53578e07","Type":"ContainerStarted","Data":"07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546"}
Nov 23 09:13:43 crc kubenswrapper[5028]: I1123 09:13:43.550261 5028 generic.go:334] "Generic (PLEG): container finished" podID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerID="07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546" exitCode=0
Nov 23 09:13:43 crc kubenswrapper[5028]: I1123 09:13:43.550346 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sct6" event={"ID":"18f6f5cd-3d90-4d20-a68e-fd8c53578e07","Type":"ContainerDied","Data":"07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546"}
Nov 23 09:13:44 crc kubenswrapper[5028]: I1123 09:13:44.565823 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sct6" event={"ID":"18f6f5cd-3d90-4d20-a68e-fd8c53578e07","Type":"ContainerStarted","Data":"65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787"}
Nov 23 09:13:44 crc kubenswrapper[5028]: I1123 09:13:44.588490 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4sct6" podStartSLOduration=3.057924263 podStartE2EDuration="5.588446957s" podCreationTimestamp="2025-11-23 09:13:39 +0000 UTC" firstStartedPulling="2025-11-23 09:13:41.524506816 +0000 UTC m=+8605.221911595" lastFinishedPulling="2025-11-23 09:13:44.05502951 +0000 UTC m=+8607.752434289" observedRunningTime="2025-11-23 09:13:44.588042447 +0000 UTC m=+8608.285447236" watchObservedRunningTime="2025-11-23 09:13:44.588446957 +0000 UTC m=+8608.285851736"
Nov 23 09:13:46 crc kubenswrapper[5028]: I1123 09:13:46.602016 5028 generic.go:334] "Generic (PLEG): container finished" podID="c73c8666-ed55-4274-a8c4-de56ff21909e" containerID="7bd6381d3e6dfbbdf609c04f5053b84cd19ff981a842a80f50603aa2d96b9697" exitCode=0
Nov 23 09:13:46 crc kubenswrapper[5028]: I1123 09:13:46.602082 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-cm946" event={"ID":"c73c8666-ed55-4274-a8c4-de56ff21909e","Type":"ContainerDied","Data":"7bd6381d3e6dfbbdf609c04f5053b84cd19ff981a842a80f50603aa2d96b9697"}
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.232379 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.383049 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-inventory\") pod \"c73c8666-ed55-4274-a8c4-de56ff21909e\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") "
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.383562 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ssh-key\") pod \"c73c8666-ed55-4274-a8c4-de56ff21909e\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") "
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.383795 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsljx\" (UniqueName: \"kubernetes.io/projected/c73c8666-ed55-4274-a8c4-de56ff21909e-kube-api-access-tsljx\") pod \"c73c8666-ed55-4274-a8c4-de56ff21909e\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") "
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.383934 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ceph\") pod \"c73c8666-ed55-4274-a8c4-de56ff21909e\" (UID: \"c73c8666-ed55-4274-a8c4-de56ff21909e\") "
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.391188 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ceph" (OuterVolumeSpecName: "ceph") pod "c73c8666-ed55-4274-a8c4-de56ff21909e" (UID: "c73c8666-ed55-4274-a8c4-de56ff21909e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.403908 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73c8666-ed55-4274-a8c4-de56ff21909e-kube-api-access-tsljx" (OuterVolumeSpecName: "kube-api-access-tsljx") pod "c73c8666-ed55-4274-a8c4-de56ff21909e" (UID: "c73c8666-ed55-4274-a8c4-de56ff21909e"). InnerVolumeSpecName "kube-api-access-tsljx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.416336 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c73c8666-ed55-4274-a8c4-de56ff21909e" (UID: "c73c8666-ed55-4274-a8c4-de56ff21909e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.419487 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-inventory" (OuterVolumeSpecName: "inventory") pod "c73c8666-ed55-4274-a8c4-de56ff21909e" (UID: "c73c8666-ed55-4274-a8c4-de56ff21909e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.488991 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ceph\") on node \"crc\" DevicePath \"\""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.489059 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.489081 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c73c8666-ed55-4274-a8c4-de56ff21909e-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.489102 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsljx\" (UniqueName: \"kubernetes.io/projected/c73c8666-ed55-4274-a8c4-de56ff21909e-kube-api-access-tsljx\") on node \"crc\" DevicePath \"\""
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.630193 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-cm946" event={"ID":"c73c8666-ed55-4274-a8c4-de56ff21909e","Type":"ContainerDied","Data":"f1bccb4138d0763a970d6d949628026fbb6f6adbb201ee43afdbd36f282bad58"}
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.630252 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bccb4138d0763a970d6d949628026fbb6f6adbb201ee43afdbd36f282bad58"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.630309 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-cm946"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.752080 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-6zfpv"]
Nov 23 09:13:48 crc kubenswrapper[5028]: E1123 09:13:48.752737 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73c8666-ed55-4274-a8c4-de56ff21909e" containerName="download-cache-openstack-openstack-cell1"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.752758 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73c8666-ed55-4274-a8c4-de56ff21909e" containerName="download-cache-openstack-openstack-cell1"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.753047 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73c8666-ed55-4274-a8c4-de56ff21909e" containerName="download-cache-openstack-openstack-cell1"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.754008 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.757135 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.759277 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.769860 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-6zfpv"]
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.797222 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ssh-key\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.797327 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftsxt\" (UniqueName: \"kubernetes.io/projected/052ccf3b-c34b-4dc5-a81a-0aeec151c343-kube-api-access-ftsxt\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.797412 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ceph\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.797493 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-inventory\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.899529 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ssh-key\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.899642 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftsxt\" (UniqueName: \"kubernetes.io/projected/052ccf3b-c34b-4dc5-a81a-0aeec151c343-kube-api-access-ftsxt\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.899729 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ceph\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.899806 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-inventory\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.906285 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ssh-key\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.906574 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ceph\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.906809 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-inventory\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:48 crc kubenswrapper[5028]: I1123 09:13:48.920421 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftsxt\" (UniqueName: \"kubernetes.io/projected/052ccf3b-c34b-4dc5-a81a-0aeec151c343-kube-api-access-ftsxt\") pod \"configure-network-openstack-openstack-cell1-6zfpv\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") " pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:49 crc kubenswrapper[5028]: I1123 09:13:49.101256 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:13:49 crc kubenswrapper[5028]: I1123 09:13:49.704618 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-6zfpv"]
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.344860 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.344925 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.420412 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.658707 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv" event={"ID":"052ccf3b-c34b-4dc5-a81a-0aeec151c343","Type":"ContainerStarted","Data":"f1246ae05ad0d35b7575e111451780567c6f3b8b810fc51d6fb3932b2ceacf21"}
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.659192 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv" event={"ID":"052ccf3b-c34b-4dc5-a81a-0aeec151c343","Type":"ContainerStarted","Data":"c3f081a520fecd727b19c74d083b6b9bbecc8da98ce1b1052c66a41a96973724"}
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.695524 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv" podStartSLOduration=2.212840007 podStartE2EDuration="2.695495444s" podCreationTimestamp="2025-11-23 09:13:48 +0000 UTC" firstStartedPulling="2025-11-23 09:13:49.704581284 +0000 UTC m=+8613.401986063" lastFinishedPulling="2025-11-23 09:13:50.187236721 +0000 UTC m=+8613.884641500" observedRunningTime="2025-11-23 09:13:50.679556048 +0000 UTC m=+8614.376960827" watchObservedRunningTime="2025-11-23 09:13:50.695495444 +0000 UTC m=+8614.392900223"
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.725702 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:50 crc kubenswrapper[5028]: I1123 09:13:50.788603 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sct6"]
Nov 23 09:13:52 crc kubenswrapper[5028]: I1123 09:13:52.681359 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4sct6" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="registry-server" containerID="cri-o://65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787" gracePeriod=2
Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.221102 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sct6"
Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.310009 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v6mn\" (UniqueName: \"kubernetes.io/projected/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-kube-api-access-4v6mn\") pod \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") "
Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.310338 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-utilities\") pod \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") "
Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.310405 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-catalog-content\") pod \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\" (UID: \"18f6f5cd-3d90-4d20-a68e-fd8c53578e07\") "
Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.312102 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-utilities" (OuterVolumeSpecName: "utilities") pod "18f6f5cd-3d90-4d20-a68e-fd8c53578e07" (UID: "18f6f5cd-3d90-4d20-a68e-fd8c53578e07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.320807 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-kube-api-access-4v6mn" (OuterVolumeSpecName: "kube-api-access-4v6mn") pod "18f6f5cd-3d90-4d20-a68e-fd8c53578e07" (UID: "18f6f5cd-3d90-4d20-a68e-fd8c53578e07"). InnerVolumeSpecName "kube-api-access-4v6mn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.330996 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18f6f5cd-3d90-4d20-a68e-fd8c53578e07" (UID: "18f6f5cd-3d90-4d20-a68e-fd8c53578e07"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.413274 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.413571 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.413669 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v6mn\" (UniqueName: \"kubernetes.io/projected/18f6f5cd-3d90-4d20-a68e-fd8c53578e07-kube-api-access-4v6mn\") on node \"crc\" DevicePath \"\"" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.699392 5028 generic.go:334] "Generic (PLEG): container finished" podID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerID="65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787" exitCode=0 Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.699524 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4sct6" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.699491 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sct6" event={"ID":"18f6f5cd-3d90-4d20-a68e-fd8c53578e07","Type":"ContainerDied","Data":"65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787"} Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.700197 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4sct6" event={"ID":"18f6f5cd-3d90-4d20-a68e-fd8c53578e07","Type":"ContainerDied","Data":"8c8d64c13801549053f9dd4ba536de40973d6655b77973d68d72f429e5185711"} Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.700236 5028 scope.go:117] "RemoveContainer" containerID="65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.732125 5028 scope.go:117] "RemoveContainer" containerID="07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.745566 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sct6"] Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.754849 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4sct6"] Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.768866 5028 scope.go:117] "RemoveContainer" containerID="5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.806991 5028 scope.go:117] "RemoveContainer" containerID="65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787" Nov 23 09:13:53 crc kubenswrapper[5028]: E1123 09:13:53.807299 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787\": container with ID starting with 65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787 not found: ID does not exist" containerID="65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.807324 5028 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787"} err="failed to get container status \"65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787\": rpc error: code = NotFound desc = could not find container \"65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787\": container with ID starting with 65478b95266c79cc5c8e7fe5f272aa13f3d555e078a3220eedd9c0977ca44787 not found: ID does not exist" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.807347 5028 scope.go:117] "RemoveContainer" containerID="07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546" Nov 23 09:13:53 crc kubenswrapper[5028]: E1123 09:13:53.807535 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546\": container with ID starting with 07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546 not found: ID does not exist" containerID="07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.807556 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546"} err="failed to get container status \"07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546\": rpc error: code = NotFound desc = could not find container \"07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546\": container with ID starting with 07612dab941d0665c2638a7c4f038959c5cf7aa5165c4ac6b56cdfbef2dd5546 not found: ID does not exist" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.807572 5028 scope.go:117] "RemoveContainer" containerID="5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444" Nov 23 09:13:53 crc kubenswrapper[5028]: E1123 09:13:53.808385 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444\": container with ID starting with 5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444 not found: ID does not exist" containerID="5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444" Nov 23 09:13:53 crc kubenswrapper[5028]: I1123 09:13:53.808415 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444"} err="failed to get container status \"5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444\": rpc error: code = NotFound desc = could not find container \"5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444\": container with ID starting with 5278bd4f136e5f90e1c492de23d2aa06a6cee8ece82ad21cc38317730b212444 not found: ID does not exist" Nov 23 09:13:55 crc kubenswrapper[5028]: I1123 09:13:55.071745 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" path="/var/lib/kubelet/pods/18f6f5cd-3d90-4d20-a68e-fd8c53578e07/volumes" Nov 23 09:13:56 crc kubenswrapper[5028]: I1123 09:13:56.739069 5028 generic.go:334] "Generic (PLEG): container finished" podID="1ecadc08-3d9f-4d0c-b36b-57a5631f71a0" containerID="3ce88a7889ca66beef1d6b39f4f1803259196671765a1b7091cbbe04e88e68e0" exitCode=0 Nov 23 09:13:56 crc kubenswrapper[5028]: I1123 
09:13:56.739166 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-fzgtv" event={"ID":"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0","Type":"ContainerDied","Data":"3ce88a7889ca66beef1d6b39f4f1803259196671765a1b7091cbbe04e88e68e0"} Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.342522 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-fzgtv" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.444453 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whvdp\" (UniqueName: \"kubernetes.io/projected/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-kube-api-access-whvdp\") pod \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.444941 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-inventory\") pod \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.445036 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-ssh-key\") pod \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\" (UID: \"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0\") " Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.452625 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-kube-api-access-whvdp" (OuterVolumeSpecName: "kube-api-access-whvdp") pod "1ecadc08-3d9f-4d0c-b36b-57a5631f71a0" (UID: "1ecadc08-3d9f-4d0c-b36b-57a5631f71a0"). InnerVolumeSpecName "kube-api-access-whvdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.476835 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-inventory" (OuterVolumeSpecName: "inventory") pod "1ecadc08-3d9f-4d0c-b36b-57a5631f71a0" (UID: "1ecadc08-3d9f-4d0c-b36b-57a5631f71a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.478962 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1ecadc08-3d9f-4d0c-b36b-57a5631f71a0" (UID: "1ecadc08-3d9f-4d0c-b36b-57a5631f71a0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.547879 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.547931 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.547941 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whvdp\" (UniqueName: \"kubernetes.io/projected/1ecadc08-3d9f-4d0c-b36b-57a5631f71a0-kube-api-access-whvdp\") on node \"crc\" DevicePath \"\"" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.778571 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-networker-fzgtv" event={"ID":"1ecadc08-3d9f-4d0c-b36b-57a5631f71a0","Type":"ContainerDied","Data":"917e539f6907f68004332585b7e57a24165cb9337a6c93564794194650feb19d"} Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.779237 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="917e539f6907f68004332585b7e57a24165cb9337a6c93564794194650feb19d" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.779328 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-networker-fzgtv" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.898345 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-networker-4fc87"] Nov 23 09:13:58 crc kubenswrapper[5028]: E1123 09:13:58.898845 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="registry-server" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.898865 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="registry-server" Nov 23 09:13:58 crc kubenswrapper[5028]: E1123 09:13:58.898880 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ecadc08-3d9f-4d0c-b36b-57a5631f71a0" containerName="configure-network-openstack-openstack-networker" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.898889 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ecadc08-3d9f-4d0c-b36b-57a5631f71a0" containerName="configure-network-openstack-openstack-networker" Nov 23 09:13:58 crc kubenswrapper[5028]: E1123 09:13:58.898903 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="extract-content" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.898908 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="extract-content" Nov 23 09:13:58 crc kubenswrapper[5028]: E1123 09:13:58.898935 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="extract-utilities" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.898941 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="extract-utilities" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.899362 5028 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1ecadc08-3d9f-4d0c-b36b-57a5631f71a0" containerName="configure-network-openstack-openstack-networker" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.899388 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f6f5cd-3d90-4d20-a68e-fd8c53578e07" containerName="registry-server" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.900282 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.903588 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.909456 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.916814 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-networker-4fc87"] Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.959199 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-inventory\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.959255 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-ssh-key\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:58 crc kubenswrapper[5028]: I1123 09:13:58.959524 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9z5\" (UniqueName: \"kubernetes.io/projected/fb11ea62-5c93-4e7a-8c64-bc843b862244-kube-api-access-wx9z5\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.062998 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-inventory\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.063079 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-ssh-key\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.063228 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx9z5\" (UniqueName: 
\"kubernetes.io/projected/fb11ea62-5c93-4e7a-8c64-bc843b862244-kube-api-access-wx9z5\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.068605 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-ssh-key\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.069674 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-inventory\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.089128 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx9z5\" (UniqueName: \"kubernetes.io/projected/fb11ea62-5c93-4e7a-8c64-bc843b862244-kube-api-access-wx9z5\") pod \"validate-network-openstack-openstack-networker-4fc87\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.219862 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:13:59 crc kubenswrapper[5028]: I1123 09:13:59.798391 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-networker-4fc87"] Nov 23 09:14:00 crc kubenswrapper[5028]: I1123 09:14:00.807019 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-4fc87" event={"ID":"fb11ea62-5c93-4e7a-8c64-bc843b862244","Type":"ContainerStarted","Data":"c879d5a531ed9167c40b2975e4370a1b6a223b9bbbf4ade33be4e5590435eb44"} Nov 23 09:14:00 crc kubenswrapper[5028]: I1123 09:14:00.807702 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-4fc87" event={"ID":"fb11ea62-5c93-4e7a-8c64-bc843b862244","Type":"ContainerStarted","Data":"763e57b0e664c63d29db91a83c6130c5a90d29a27ce117f5a2222c6faa8034a6"} Nov 23 09:14:00 crc kubenswrapper[5028]: I1123 09:14:00.852260 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-networker-4fc87" podStartSLOduration=2.361319461 podStartE2EDuration="2.852226543s" podCreationTimestamp="2025-11-23 09:13:58 +0000 UTC" firstStartedPulling="2025-11-23 09:13:59.802560695 +0000 UTC m=+8623.499965474" lastFinishedPulling="2025-11-23 09:14:00.293467747 +0000 UTC m=+8623.990872556" observedRunningTime="2025-11-23 09:14:00.836208755 +0000 UTC m=+8624.533613574" watchObservedRunningTime="2025-11-23 09:14:00.852226543 +0000 UTC m=+8624.549631352" Nov 23 09:14:00 crc kubenswrapper[5028]: I1123 09:14:00.946161 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:14:00 crc kubenswrapper[5028]: I1123 09:14:00.946248 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:14:05 crc kubenswrapper[5028]: I1123 09:14:05.869795 5028 generic.go:334] "Generic (PLEG): container finished" podID="fb11ea62-5c93-4e7a-8c64-bc843b862244" containerID="c879d5a531ed9167c40b2975e4370a1b6a223b9bbbf4ade33be4e5590435eb44" exitCode=0 Nov 23 09:14:05 crc kubenswrapper[5028]: I1123 09:14:05.869884 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-4fc87" event={"ID":"fb11ea62-5c93-4e7a-8c64-bc843b862244","Type":"ContainerDied","Data":"c879d5a531ed9167c40b2975e4370a1b6a223b9bbbf4ade33be4e5590435eb44"} Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.385074 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.486224 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-ssh-key\") pod \"fb11ea62-5c93-4e7a-8c64-bc843b862244\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.486342 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx9z5\" (UniqueName: \"kubernetes.io/projected/fb11ea62-5c93-4e7a-8c64-bc843b862244-kube-api-access-wx9z5\") pod \"fb11ea62-5c93-4e7a-8c64-bc843b862244\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.487226 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-inventory\") pod \"fb11ea62-5c93-4e7a-8c64-bc843b862244\" (UID: \"fb11ea62-5c93-4e7a-8c64-bc843b862244\") " Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.493352 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb11ea62-5c93-4e7a-8c64-bc843b862244-kube-api-access-wx9z5" (OuterVolumeSpecName: "kube-api-access-wx9z5") pod "fb11ea62-5c93-4e7a-8c64-bc843b862244" (UID: "fb11ea62-5c93-4e7a-8c64-bc843b862244"). InnerVolumeSpecName "kube-api-access-wx9z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.518340 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-inventory" (OuterVolumeSpecName: "inventory") pod "fb11ea62-5c93-4e7a-8c64-bc843b862244" (UID: "fb11ea62-5c93-4e7a-8c64-bc843b862244"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.521911 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fb11ea62-5c93-4e7a-8c64-bc843b862244" (UID: "fb11ea62-5c93-4e7a-8c64-bc843b862244"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.591258 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.591300 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx9z5\" (UniqueName: \"kubernetes.io/projected/fb11ea62-5c93-4e7a-8c64-bc843b862244-kube-api-access-wx9z5\") on node \"crc\" DevicePath \"\"" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.591314 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fb11ea62-5c93-4e7a-8c64-bc843b862244-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.896245 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-networker-4fc87" event={"ID":"fb11ea62-5c93-4e7a-8c64-bc843b862244","Type":"ContainerDied","Data":"763e57b0e664c63d29db91a83c6130c5a90d29a27ce117f5a2222c6faa8034a6"} Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.896308 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763e57b0e664c63d29db91a83c6130c5a90d29a27ce117f5a2222c6faa8034a6" Nov 23 09:14:07 crc kubenswrapper[5028]: I1123 09:14:07.896376 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-networker-4fc87" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.006188 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-networker-qgscq"] Nov 23 09:14:08 crc kubenswrapper[5028]: E1123 09:14:08.006730 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb11ea62-5c93-4e7a-8c64-bc843b862244" containerName="validate-network-openstack-openstack-networker" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.006754 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb11ea62-5c93-4e7a-8c64-bc843b862244" containerName="validate-network-openstack-openstack-networker" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.007082 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb11ea62-5c93-4e7a-8c64-bc843b862244" containerName="validate-network-openstack-openstack-networker" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.008382 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.011285 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.011713 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.076417 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-networker-qgscq"] Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.105719 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-inventory\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.105817 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-ssh-key\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.105971 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jqz\" (UniqueName: \"kubernetes.io/projected/819ee5e5-ede2-4053-9199-247708921b7b-kube-api-access-98jqz\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.208359 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-inventory\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.208489 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-ssh-key\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.208610 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jqz\" (UniqueName: \"kubernetes.io/projected/819ee5e5-ede2-4053-9199-247708921b7b-kube-api-access-98jqz\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.216766 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-ssh-key\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " 
pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.217510 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-inventory\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.230757 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jqz\" (UniqueName: \"kubernetes.io/projected/819ee5e5-ede2-4053-9199-247708921b7b-kube-api-access-98jqz\") pod \"install-os-openstack-openstack-networker-qgscq\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") " pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:08 crc kubenswrapper[5028]: I1123 09:14:08.377039 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-qgscq" Nov 23 09:14:09 crc kubenswrapper[5028]: I1123 09:14:09.021021 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-networker-qgscq"] Nov 23 09:14:09 crc kubenswrapper[5028]: I1123 09:14:09.918390 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-qgscq" event={"ID":"819ee5e5-ede2-4053-9199-247708921b7b","Type":"ContainerStarted","Data":"afe08dd7fd8bd7be750f4e0e308648cb2fbe210d9ea86e3ae713b13912de8882"} Nov 23 09:14:09 crc kubenswrapper[5028]: I1123 09:14:09.918921 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-qgscq" event={"ID":"819ee5e5-ede2-4053-9199-247708921b7b","Type":"ContainerStarted","Data":"b97d1506d8bf729a37fb60be2e1a34e496e6a200d92ae3e91d83794f4e95bb35"} Nov 23 09:14:09 crc kubenswrapper[5028]: I1123 09:14:09.946468 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-networker-qgscq" podStartSLOduration=2.5573318 podStartE2EDuration="2.946444416s" podCreationTimestamp="2025-11-23 09:14:07 +0000 UTC" firstStartedPulling="2025-11-23 09:14:09.017764321 +0000 UTC m=+8632.715169100" lastFinishedPulling="2025-11-23 09:14:09.406876937 +0000 UTC m=+8633.104281716" observedRunningTime="2025-11-23 09:14:09.940192431 +0000 UTC m=+8633.637597220" watchObservedRunningTime="2025-11-23 09:14:09.946444416 +0000 UTC m=+8633.643849195" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.623789 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mdnqq"] Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.628154 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.638213 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mdnqq"] Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.791025 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-utilities\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.791529 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzq4f\" (UniqueName: \"kubernetes.io/projected/55b6d766-166a-4a3c-8cb4-69201f140ec0-kube-api-access-qzq4f\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.791727 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-catalog-content\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.894292 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzq4f\" (UniqueName: \"kubernetes.io/projected/55b6d766-166a-4a3c-8cb4-69201f140ec0-kube-api-access-qzq4f\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.894451 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-catalog-content\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.894598 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-utilities\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.895323 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-catalog-content\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.895422 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-utilities\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.915031 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qzq4f\" (UniqueName: \"kubernetes.io/projected/55b6d766-166a-4a3c-8cb4-69201f140ec0-kube-api-access-qzq4f\") pod \"community-operators-mdnqq\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:24 crc kubenswrapper[5028]: I1123 09:14:24.952831 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:25 crc kubenswrapper[5028]: I1123 09:14:25.510674 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mdnqq"] Nov 23 09:14:25 crc kubenswrapper[5028]: W1123 09:14:25.517587 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55b6d766_166a_4a3c_8cb4_69201f140ec0.slice/crio-cae5cd4adff0c9d374147a5c51af19d3c6201a9605cfae0015ddf9ec38dff8fc WatchSource:0}: Error finding container cae5cd4adff0c9d374147a5c51af19d3c6201a9605cfae0015ddf9ec38dff8fc: Status 404 returned error can't find the container with id cae5cd4adff0c9d374147a5c51af19d3c6201a9605cfae0015ddf9ec38dff8fc Nov 23 09:14:26 crc kubenswrapper[5028]: I1123 09:14:26.111273 5028 generic.go:334] "Generic (PLEG): container finished" podID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerID="095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37" exitCode=0 Nov 23 09:14:26 crc kubenswrapper[5028]: I1123 09:14:26.111396 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdnqq" event={"ID":"55b6d766-166a-4a3c-8cb4-69201f140ec0","Type":"ContainerDied","Data":"095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37"} Nov 23 09:14:26 crc kubenswrapper[5028]: I1123 09:14:26.111650 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdnqq" event={"ID":"55b6d766-166a-4a3c-8cb4-69201f140ec0","Type":"ContainerStarted","Data":"cae5cd4adff0c9d374147a5c51af19d3c6201a9605cfae0015ddf9ec38dff8fc"} Nov 23 09:14:27 crc kubenswrapper[5028]: I1123 09:14:27.125929 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdnqq" event={"ID":"55b6d766-166a-4a3c-8cb4-69201f140ec0","Type":"ContainerStarted","Data":"afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c"} Nov 23 09:14:28 crc kubenswrapper[5028]: I1123 09:14:28.141929 5028 generic.go:334] "Generic (PLEG): container finished" podID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerID="afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c" exitCode=0 Nov 23 09:14:28 crc kubenswrapper[5028]: I1123 09:14:28.142020 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdnqq" event={"ID":"55b6d766-166a-4a3c-8cb4-69201f140ec0","Type":"ContainerDied","Data":"afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c"} Nov 23 09:14:29 crc kubenswrapper[5028]: I1123 09:14:29.160564 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdnqq" event={"ID":"55b6d766-166a-4a3c-8cb4-69201f140ec0","Type":"ContainerStarted","Data":"52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c"} Nov 23 09:14:29 crc kubenswrapper[5028]: I1123 09:14:29.191245 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mdnqq" 
podStartSLOduration=2.723603613 podStartE2EDuration="5.191219416s" podCreationTimestamp="2025-11-23 09:14:24 +0000 UTC" firstStartedPulling="2025-11-23 09:14:26.114492657 +0000 UTC m=+8649.811897446" lastFinishedPulling="2025-11-23 09:14:28.58210843 +0000 UTC m=+8652.279513249" observedRunningTime="2025-11-23 09:14:29.181192357 +0000 UTC m=+8652.878597156" watchObservedRunningTime="2025-11-23 09:14:29.191219416 +0000 UTC m=+8652.888624195" Nov 23 09:14:30 crc kubenswrapper[5028]: I1123 09:14:30.946452 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:14:30 crc kubenswrapper[5028]: I1123 09:14:30.946877 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:14:34 crc kubenswrapper[5028]: I1123 09:14:34.953285 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:34 crc kubenswrapper[5028]: I1123 09:14:34.955295 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:35 crc kubenswrapper[5028]: I1123 09:14:35.006741 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:35 crc kubenswrapper[5028]: I1123 09:14:35.285588 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:35 crc kubenswrapper[5028]: I1123 09:14:35.351051 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mdnqq"] Nov 23 09:14:37 crc kubenswrapper[5028]: I1123 09:14:37.544918 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mdnqq" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="registry-server" containerID="cri-o://52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c" gracePeriod=2 Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.096903 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.145416 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-utilities\") pod \"55b6d766-166a-4a3c-8cb4-69201f140ec0\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.145831 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-catalog-content\") pod \"55b6d766-166a-4a3c-8cb4-69201f140ec0\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.145972 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzq4f\" (UniqueName: \"kubernetes.io/projected/55b6d766-166a-4a3c-8cb4-69201f140ec0-kube-api-access-qzq4f\") pod \"55b6d766-166a-4a3c-8cb4-69201f140ec0\" (UID: \"55b6d766-166a-4a3c-8cb4-69201f140ec0\") " Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.147511 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-utilities" (OuterVolumeSpecName: "utilities") pod "55b6d766-166a-4a3c-8cb4-69201f140ec0" (UID: "55b6d766-166a-4a3c-8cb4-69201f140ec0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.159364 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b6d766-166a-4a3c-8cb4-69201f140ec0-kube-api-access-qzq4f" (OuterVolumeSpecName: "kube-api-access-qzq4f") pod "55b6d766-166a-4a3c-8cb4-69201f140ec0" (UID: "55b6d766-166a-4a3c-8cb4-69201f140ec0"). InnerVolumeSpecName "kube-api-access-qzq4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.223628 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55b6d766-166a-4a3c-8cb4-69201f140ec0" (UID: "55b6d766-166a-4a3c-8cb4-69201f140ec0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.249025 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.249077 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzq4f\" (UniqueName: \"kubernetes.io/projected/55b6d766-166a-4a3c-8cb4-69201f140ec0-kube-api-access-qzq4f\") on node \"crc\" DevicePath \"\"" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.249101 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55b6d766-166a-4a3c-8cb4-69201f140ec0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.561828 5028 generic.go:334] "Generic (PLEG): container finished" podID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerID="52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c" exitCode=0 Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.561891 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdnqq" event={"ID":"55b6d766-166a-4a3c-8cb4-69201f140ec0","Type":"ContainerDied","Data":"52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c"} Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.561913 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mdnqq" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.561951 5028 scope.go:117] "RemoveContainer" containerID="52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.561930 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mdnqq" event={"ID":"55b6d766-166a-4a3c-8cb4-69201f140ec0","Type":"ContainerDied","Data":"cae5cd4adff0c9d374147a5c51af19d3c6201a9605cfae0015ddf9ec38dff8fc"} Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.605273 5028 scope.go:117] "RemoveContainer" containerID="afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.611433 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mdnqq"] Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.627800 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mdnqq"] Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.649259 5028 scope.go:117] "RemoveContainer" containerID="095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.724411 5028 scope.go:117] "RemoveContainer" containerID="52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c" Nov 23 09:14:38 crc kubenswrapper[5028]: E1123 09:14:38.725161 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c\": container with ID starting with 52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c not found: ID does not exist" containerID="52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.725234 
5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c"} err="failed to get container status \"52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c\": rpc error: code = NotFound desc = could not find container \"52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c\": container with ID starting with 52bf15bd7ba701b9f11aae23005b6c2c42bc38835db7e5c17d317a666946430c not found: ID does not exist" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.725277 5028 scope.go:117] "RemoveContainer" containerID="afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c" Nov 23 09:14:38 crc kubenswrapper[5028]: E1123 09:14:38.725662 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c\": container with ID starting with afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c not found: ID does not exist" containerID="afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.725731 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c"} err="failed to get container status \"afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c\": rpc error: code = NotFound desc = could not find container \"afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c\": container with ID starting with afd1df684e547deb9b7551ecd33772005f2585c6ee1825e0fb7c423d8eef0d7c not found: ID does not exist" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.725781 5028 scope.go:117] "RemoveContainer" containerID="095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37" Nov 23 09:14:38 crc kubenswrapper[5028]: E1123 09:14:38.726941 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37\": container with ID starting with 095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37 not found: ID does not exist" containerID="095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37" Nov 23 09:14:38 crc kubenswrapper[5028]: I1123 09:14:38.726993 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37"} err="failed to get container status \"095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37\": rpc error: code = NotFound desc = could not find container \"095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37\": container with ID starting with 095f9634b5b2d1f4ea27d90581aadd5d7f57925e4c076c4ab550c1ae28039f37 not found: ID does not exist" Nov 23 09:14:39 crc kubenswrapper[5028]: I1123 09:14:39.072325 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" path="/var/lib/kubelet/pods/55b6d766-166a-4a3c-8cb4-69201f140ec0/volumes" Nov 23 09:14:52 crc kubenswrapper[5028]: I1123 09:14:52.719654 5028 generic.go:334] "Generic (PLEG): container finished" podID="052ccf3b-c34b-4dc5-a81a-0aeec151c343" containerID="f1246ae05ad0d35b7575e111451780567c6f3b8b810fc51d6fb3932b2ceacf21" exitCode=0 Nov 23 09:14:52 crc kubenswrapper[5028]: 
Nov 23 09:14:52 crc kubenswrapper[5028]: I1123 09:14:52.719654 5028 generic.go:334] "Generic (PLEG): container finished" podID="052ccf3b-c34b-4dc5-a81a-0aeec151c343" containerID="f1246ae05ad0d35b7575e111451780567c6f3b8b810fc51d6fb3932b2ceacf21" exitCode=0
Nov 23 09:14:52 crc kubenswrapper[5028]: I1123 09:14:52.719810 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv" event={"ID":"052ccf3b-c34b-4dc5-a81a-0aeec151c343","Type":"ContainerDied","Data":"f1246ae05ad0d35b7575e111451780567c6f3b8b810fc51d6fb3932b2ceacf21"}
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.312364 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.391339 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftsxt\" (UniqueName: \"kubernetes.io/projected/052ccf3b-c34b-4dc5-a81a-0aeec151c343-kube-api-access-ftsxt\") pod \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") "
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.391654 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-inventory\") pod \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") "
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.391705 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ssh-key\") pod \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") "
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.391756 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ceph\") pod \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\" (UID: \"052ccf3b-c34b-4dc5-a81a-0aeec151c343\") "
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.400263 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052ccf3b-c34b-4dc5-a81a-0aeec151c343-kube-api-access-ftsxt" (OuterVolumeSpecName: "kube-api-access-ftsxt") pod "052ccf3b-c34b-4dc5-a81a-0aeec151c343" (UID: "052ccf3b-c34b-4dc5-a81a-0aeec151c343"). InnerVolumeSpecName "kube-api-access-ftsxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.404657 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ceph" (OuterVolumeSpecName: "ceph") pod "052ccf3b-c34b-4dc5-a81a-0aeec151c343" (UID: "052ccf3b-c34b-4dc5-a81a-0aeec151c343"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.424012 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "052ccf3b-c34b-4dc5-a81a-0aeec151c343" (UID: "052ccf3b-c34b-4dc5-a81a-0aeec151c343"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.449589 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-inventory" (OuterVolumeSpecName: "inventory") pod "052ccf3b-c34b-4dc5-a81a-0aeec151c343" (UID: "052ccf3b-c34b-4dc5-a81a-0aeec151c343"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.495881 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftsxt\" (UniqueName: \"kubernetes.io/projected/052ccf3b-c34b-4dc5-a81a-0aeec151c343-kube-api-access-ftsxt\") on node \"crc\" DevicePath \"\""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.495929 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.495941 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.495965 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/052ccf3b-c34b-4dc5-a81a-0aeec151c343-ceph\") on node \"crc\" DevicePath \"\""
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.744287 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv" event={"ID":"052ccf3b-c34b-4dc5-a81a-0aeec151c343","Type":"ContainerDied","Data":"c3f081a520fecd727b19c74d083b6b9bbecc8da98ce1b1052c66a41a96973724"}
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.744841 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f081a520fecd727b19c74d083b6b9bbecc8da98ce1b1052c66a41a96973724"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.744355 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-6zfpv"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.866367 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hqkw7"]
Nov 23 09:14:54 crc kubenswrapper[5028]: E1123 09:14:54.867002 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="registry-server"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.867026 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="registry-server"
Nov 23 09:14:54 crc kubenswrapper[5028]: E1123 09:14:54.867050 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="extract-content"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.867065 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="extract-content"
Nov 23 09:14:54 crc kubenswrapper[5028]: E1123 09:14:54.867099 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="extract-utilities"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.867108 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="extract-utilities"
Nov 23 09:14:54 crc kubenswrapper[5028]: E1123 09:14:54.867137 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052ccf3b-c34b-4dc5-a81a-0aeec151c343" containerName="configure-network-openstack-openstack-cell1"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.867150 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="052ccf3b-c34b-4dc5-a81a-0aeec151c343" containerName="configure-network-openstack-openstack-cell1"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.867421 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b6d766-166a-4a3c-8cb4-69201f140ec0" containerName="registry-server"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.867445 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="052ccf3b-c34b-4dc5-a81a-0aeec151c343" containerName="configure-network-openstack-openstack-cell1"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.868488 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.872438 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.883739 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hqkw7"]
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.890264 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.907032 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqdx7\" (UniqueName: \"kubernetes.io/projected/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-kube-api-access-pqdx7\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.907160 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ssh-key\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.907255 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-inventory\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:54 crc kubenswrapper[5028]: I1123 09:14:54.907312 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ceph\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.009437 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqdx7\" (UniqueName: \"kubernetes.io/projected/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-kube-api-access-pqdx7\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.009532 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ssh-key\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.009608 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-inventory\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.009648 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ceph\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.015789 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ceph\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.016195 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-inventory\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.016537 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ssh-key\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.028653 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqdx7\" (UniqueName: \"kubernetes.io/projected/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-kube-api-access-pqdx7\") pod \"validate-network-openstack-openstack-cell1-hqkw7\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") " pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.190731 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:14:55 crc kubenswrapper[5028]: I1123 09:14:55.854550 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-hqkw7"]
Nov 23 09:14:56 crc kubenswrapper[5028]: I1123 09:14:56.775794 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7" event={"ID":"aea6c6e2-e241-4283-bac6-1417dd1c2e8d","Type":"ContainerStarted","Data":"7599bca158a9d5a39bb99084110ebf7c46b0b4211819fe1a1fb7ad25253b1eb9"}
Nov 23 09:14:56 crc kubenswrapper[5028]: I1123 09:14:56.776409 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7" event={"ID":"aea6c6e2-e241-4283-bac6-1417dd1c2e8d","Type":"ContainerStarted","Data":"b650f66e1f55297efa3afed43360e505159e38d41edf140c86a791e1cc2cfbc0"}
Nov 23 09:14:56 crc kubenswrapper[5028]: I1123 09:14:56.797501 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7" podStartSLOduration=2.369843133 podStartE2EDuration="2.797477235s" podCreationTimestamp="2025-11-23 09:14:54 +0000 UTC" firstStartedPulling="2025-11-23 09:14:55.863661132 +0000 UTC m=+8679.561065931" lastFinishedPulling="2025-11-23 09:14:56.291295254 +0000 UTC m=+8679.988700033" observedRunningTime="2025-11-23 09:14:56.791597459 +0000 UTC m=+8680.489002278" watchObservedRunningTime="2025-11-23 09:14:56.797477235 +0000 UTC m=+8680.494882014"
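
The pod_startup_latency_tracker entry above packs the SLO-relevant startup timings into one line. A self-contained Go sketch for pulling them back out of a dump like this (the regexes are assumptions fitted to the format shown here, not an official log schema):

package main

import (
	"fmt"
	"regexp"
	"time"
)

var (
	e2eRe = regexp.MustCompile(`podStartE2EDuration="([^"]+)"`)
	sloRe = regexp.MustCompile(`podStartSLOduration=([0-9.]+)`)
)

func main() {
	line := `Observed pod startup duration pod="openstack/validate-network-openstack-openstack-cell1-hqkw7" podStartSLOduration=2.369843133 podStartE2EDuration="2.797477235s"`

	if m := e2eRe.FindStringSubmatch(line); m != nil {
		// "2.797477235s" parses directly as a time.Duration.
		if d, err := time.ParseDuration(m[1]); err == nil {
			fmt.Println("end-to-end startup:", d)
		}
	}
	if m := sloRe.FindStringSubmatch(line); m != nil {
		// SLO duration is logged as bare seconds, without a unit suffix.
		fmt.Println("SLO startup (s):", m[1])
	}
}
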
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.167166 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"]
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.170664 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.175049 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.176808 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.199466 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"]
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.285670 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfggg\" (UniqueName: \"kubernetes.io/projected/55893335-2a6d-4bfa-b107-71c03cec23bb-kube-api-access-gfggg\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.285788 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55893335-2a6d-4bfa-b107-71c03cec23bb-config-volume\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.285870 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55893335-2a6d-4bfa-b107-71c03cec23bb-secret-volume\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.388384 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfggg\" (UniqueName: \"kubernetes.io/projected/55893335-2a6d-4bfa-b107-71c03cec23bb-kube-api-access-gfggg\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.388473 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55893335-2a6d-4bfa-b107-71c03cec23bb-config-volume\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.388544 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55893335-2a6d-4bfa-b107-71c03cec23bb-secret-volume\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.389727 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55893335-2a6d-4bfa-b107-71c03cec23bb-config-volume\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.398531 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55893335-2a6d-4bfa-b107-71c03cec23bb-secret-volume\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.409945 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfggg\" (UniqueName: \"kubernetes.io/projected/55893335-2a6d-4bfa-b107-71c03cec23bb-kube-api-access-gfggg\") pod \"collect-profiles-29398155-ln9hd\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.524329 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.868787 5028 generic.go:334] "Generic (PLEG): container finished" podID="819ee5e5-ede2-4053-9199-247708921b7b" containerID="afe08dd7fd8bd7be750f4e0e308648cb2fbe210d9ea86e3ae713b13912de8882" exitCode=0
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.869044 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-qgscq" event={"ID":"819ee5e5-ede2-4053-9199-247708921b7b","Type":"ContainerDied","Data":"afe08dd7fd8bd7be750f4e0e308648cb2fbe210d9ea86e3ae713b13912de8882"}
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.946728 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.946813 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.946882 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.948013 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fc33854a4e7d73aa92d206882e4ff95709d2d29698b9f2a99337a7d861512b5"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 09:15:00 crc kubenswrapper[5028]: I1123 09:15:00.948096 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://7fc33854a4e7d73aa92d206882e4ff95709d2d29698b9f2a99337a7d861512b5" gracePeriod=600
Nov 23 09:15:01 crc kubenswrapper[5028]: W1123 09:15:01.038023 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55893335_2a6d_4bfa_b107_71c03cec23bb.slice/crio-11756b6d388ce4330778dfa565b6c6c17358d2cd5da9249d65c62db8c15722a4 WatchSource:0}: Error finding container 11756b6d388ce4330778dfa565b6c6c17358d2cd5da9249d65c62db8c15722a4: Status 404 returned error can't find the container with id 11756b6d388ce4330778dfa565b6c6c17358d2cd5da9249d65c62db8c15722a4
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.043837 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"]
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.885446 5028 generic.go:334] "Generic (PLEG): container finished" podID="55893335-2a6d-4bfa-b107-71c03cec23bb" containerID="484243187afa840a27e7c74f47fe168592a463812b2b19cbe9a51d608feb7c35" exitCode=0
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.885547 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd" event={"ID":"55893335-2a6d-4bfa-b107-71c03cec23bb","Type":"ContainerDied","Data":"484243187afa840a27e7c74f47fe168592a463812b2b19cbe9a51d608feb7c35"}
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.886449 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd" event={"ID":"55893335-2a6d-4bfa-b107-71c03cec23bb","Type":"ContainerStarted","Data":"11756b6d388ce4330778dfa565b6c6c17358d2cd5da9249d65c62db8c15722a4"}
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.891123 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="7fc33854a4e7d73aa92d206882e4ff95709d2d29698b9f2a99337a7d861512b5" exitCode=0
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.891174 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"7fc33854a4e7d73aa92d206882e4ff95709d2d29698b9f2a99337a7d861512b5"}
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.891248 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686"}
Nov 23 09:15:01 crc kubenswrapper[5028]: I1123 09:15:01.891283 5028 scope.go:117] "RemoveContainer" containerID="cfefe06c7f5ca1a77b7326fa816a26c04283a0def80fdd780aa1c8a9bf3ca705"
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.719729 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-qgscq"
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.859583 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-inventory\") pod \"819ee5e5-ede2-4053-9199-247708921b7b\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") "
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.860270 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jqz\" (UniqueName: \"kubernetes.io/projected/819ee5e5-ede2-4053-9199-247708921b7b-kube-api-access-98jqz\") pod \"819ee5e5-ede2-4053-9199-247708921b7b\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") "
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.860352 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-ssh-key\") pod \"819ee5e5-ede2-4053-9199-247708921b7b\" (UID: \"819ee5e5-ede2-4053-9199-247708921b7b\") "
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.874570 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/819ee5e5-ede2-4053-9199-247708921b7b-kube-api-access-98jqz" (OuterVolumeSpecName: "kube-api-access-98jqz") pod "819ee5e5-ede2-4053-9199-247708921b7b" (UID: "819ee5e5-ede2-4053-9199-247708921b7b"). InnerVolumeSpecName "kube-api-access-98jqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.917203 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "819ee5e5-ede2-4053-9199-247708921b7b" (UID: "819ee5e5-ede2-4053-9199-247708921b7b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.917285 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-inventory" (OuterVolumeSpecName: "inventory") pod "819ee5e5-ede2-4053-9199-247708921b7b" (UID: "819ee5e5-ede2-4053-9199-247708921b7b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.932252 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-networker-qgscq"
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.935310 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-networker-qgscq" event={"ID":"819ee5e5-ede2-4053-9199-247708921b7b","Type":"ContainerDied","Data":"b97d1506d8bf729a37fb60be2e1a34e496e6a200d92ae3e91d83794f4e95bb35"}
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.935373 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b97d1506d8bf729a37fb60be2e1a34e496e6a200d92ae3e91d83794f4e95bb35"
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.963504 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.963540 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jqz\" (UniqueName: \"kubernetes.io/projected/819ee5e5-ede2-4053-9199-247708921b7b-kube-api-access-98jqz\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:02 crc kubenswrapper[5028]: I1123 09:15:02.963550 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/819ee5e5-ede2-4053-9199-247708921b7b-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.004880 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-networker-cfwxv"]
Nov 23 09:15:03 crc kubenswrapper[5028]: E1123 09:15:03.006013 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="819ee5e5-ede2-4053-9199-247708921b7b" containerName="install-os-openstack-openstack-networker"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.006097 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="819ee5e5-ede2-4053-9199-247708921b7b" containerName="install-os-openstack-openstack-networker"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.006398 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="819ee5e5-ede2-4053-9199-247708921b7b" containerName="install-os-openstack-openstack-networker"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.007466 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.010318 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.013769 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.027704 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-networker-cfwxv"]
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.068779 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-inventory\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.068845 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbdqw\" (UniqueName: \"kubernetes.io/projected/bdb64a24-7e0c-451f-b487-7a17e61c0743-kube-api-access-gbdqw\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.069225 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-ssh-key\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.178757 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-ssh-key\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.179230 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-inventory\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.179341 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbdqw\" (UniqueName: \"kubernetes.io/projected/bdb64a24-7e0c-451f-b487-7a17e61c0743-kube-api-access-gbdqw\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.186832 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-inventory\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.193613 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-ssh-key\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.211051 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbdqw\" (UniqueName: \"kubernetes.io/projected/bdb64a24-7e0c-451f-b487-7a17e61c0743-kube-api-access-gbdqw\") pod \"configure-os-openstack-openstack-networker-cfwxv\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.336459 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-cfwxv"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.350172 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.485918 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55893335-2a6d-4bfa-b107-71c03cec23bb-secret-volume\") pod \"55893335-2a6d-4bfa-b107-71c03cec23bb\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") "
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.486547 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55893335-2a6d-4bfa-b107-71c03cec23bb-config-volume\") pod \"55893335-2a6d-4bfa-b107-71c03cec23bb\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") "
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.486594 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfggg\" (UniqueName: \"kubernetes.io/projected/55893335-2a6d-4bfa-b107-71c03cec23bb-kube-api-access-gfggg\") pod \"55893335-2a6d-4bfa-b107-71c03cec23bb\" (UID: \"55893335-2a6d-4bfa-b107-71c03cec23bb\") "
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.487824 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55893335-2a6d-4bfa-b107-71c03cec23bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "55893335-2a6d-4bfa-b107-71c03cec23bb" (UID: "55893335-2a6d-4bfa-b107-71c03cec23bb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.493024 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55893335-2a6d-4bfa-b107-71c03cec23bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "55893335-2a6d-4bfa-b107-71c03cec23bb" (UID: "55893335-2a6d-4bfa-b107-71c03cec23bb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.493175 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55893335-2a6d-4bfa-b107-71c03cec23bb-kube-api-access-gfggg" (OuterVolumeSpecName: "kube-api-access-gfggg") pod "55893335-2a6d-4bfa-b107-71c03cec23bb" (UID: "55893335-2a6d-4bfa-b107-71c03cec23bb"). InnerVolumeSpecName "kube-api-access-gfggg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.589854 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/55893335-2a6d-4bfa-b107-71c03cec23bb-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.589901 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfggg\" (UniqueName: \"kubernetes.io/projected/55893335-2a6d-4bfa-b107-71c03cec23bb-kube-api-access-gfggg\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.589911 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/55893335-2a6d-4bfa-b107-71c03cec23bb-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.960697 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-networker-cfwxv"]
Nov 23 09:15:03 crc kubenswrapper[5028]: W1123 09:15:03.965077 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdb64a24_7e0c_451f_b487_7a17e61c0743.slice/crio-1cf29f3e2e7fcb4f9cd6698980ab3e6f9a2becb234518e92ec3142427a9b2ee5 WatchSource:0}: Error finding container 1cf29f3e2e7fcb4f9cd6698980ab3e6f9a2becb234518e92ec3142427a9b2ee5: Status 404 returned error can't find the container with id 1cf29f3e2e7fcb4f9cd6698980ab3e6f9a2becb234518e92ec3142427a9b2ee5
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.977415 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd" event={"ID":"55893335-2a6d-4bfa-b107-71c03cec23bb","Type":"ContainerDied","Data":"11756b6d388ce4330778dfa565b6c6c17358d2cd5da9249d65c62db8c15722a4"}
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.977503 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11756b6d388ce4330778dfa565b6c6c17358d2cd5da9249d65c62db8c15722a4"
Nov 23 09:15:03 crc kubenswrapper[5028]: I1123 09:15:03.977582 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"
Nov 23 09:15:04 crc kubenswrapper[5028]: I1123 09:15:04.451030 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"]
Nov 23 09:15:04 crc kubenswrapper[5028]: I1123 09:15:04.462884 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398110-44rql"]
Nov 23 09:15:04 crc kubenswrapper[5028]: I1123 09:15:04.990419 5028 generic.go:334] "Generic (PLEG): container finished" podID="aea6c6e2-e241-4283-bac6-1417dd1c2e8d" containerID="7599bca158a9d5a39bb99084110ebf7c46b0b4211819fe1a1fb7ad25253b1eb9" exitCode=0
Nov 23 09:15:04 crc kubenswrapper[5028]: I1123 09:15:04.990581 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7" event={"ID":"aea6c6e2-e241-4283-bac6-1417dd1c2e8d","Type":"ContainerDied","Data":"7599bca158a9d5a39bb99084110ebf7c46b0b4211819fe1a1fb7ad25253b1eb9"}
Nov 23 09:15:04 crc kubenswrapper[5028]: I1123 09:15:04.996520 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-cfwxv" event={"ID":"bdb64a24-7e0c-451f-b487-7a17e61c0743","Type":"ContainerStarted","Data":"29d569da4aaa6efcd855db3c4e79a186ac6ce8d70ab9e56b774056c004147551"}
Nov 23 09:15:04 crc kubenswrapper[5028]: I1123 09:15:04.996644 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-cfwxv" event={"ID":"bdb64a24-7e0c-451f-b487-7a17e61c0743","Type":"ContainerStarted","Data":"1cf29f3e2e7fcb4f9cd6698980ab3e6f9a2becb234518e92ec3142427a9b2ee5"}
Nov 23 09:15:05 crc kubenswrapper[5028]: I1123 09:15:05.044756 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-networker-cfwxv" podStartSLOduration=2.5866343240000003 podStartE2EDuration="3.044727991s" podCreationTimestamp="2025-11-23 09:15:02 +0000 UTC" firstStartedPulling="2025-11-23 09:15:03.968259869 +0000 UTC m=+8687.665664648" lastFinishedPulling="2025-11-23 09:15:04.426353536 +0000 UTC m=+8688.123758315" observedRunningTime="2025-11-23 09:15:05.04068475 +0000 UTC m=+8688.738089539" watchObservedRunningTime="2025-11-23 09:15:05.044727991 +0000 UTC m=+8688.742132780"
Nov 23 09:15:05 crc kubenswrapper[5028]: I1123 09:15:05.072809 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6804a411-c895-4508-8262-c197f4e649fd" path="/var/lib/kubelet/pods/6804a411-c895-4508-8262-c197f4e649fd/volumes"
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.518231 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.585831 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqdx7\" (UniqueName: \"kubernetes.io/projected/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-kube-api-access-pqdx7\") pod \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") "
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.586008 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-inventory\") pod \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") "
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.586210 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ssh-key\") pod \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") "
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.587043 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ceph\") pod \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\" (UID: \"aea6c6e2-e241-4283-bac6-1417dd1c2e8d\") "
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.593033 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ceph" (OuterVolumeSpecName: "ceph") pod "aea6c6e2-e241-4283-bac6-1417dd1c2e8d" (UID: "aea6c6e2-e241-4283-bac6-1417dd1c2e8d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.596144 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-kube-api-access-pqdx7" (OuterVolumeSpecName: "kube-api-access-pqdx7") pod "aea6c6e2-e241-4283-bac6-1417dd1c2e8d" (UID: "aea6c6e2-e241-4283-bac6-1417dd1c2e8d"). InnerVolumeSpecName "kube-api-access-pqdx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.626160 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-inventory" (OuterVolumeSpecName: "inventory") pod "aea6c6e2-e241-4283-bac6-1417dd1c2e8d" (UID: "aea6c6e2-e241-4283-bac6-1417dd1c2e8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.631074 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aea6c6e2-e241-4283-bac6-1417dd1c2e8d" (UID: "aea6c6e2-e241-4283-bac6-1417dd1c2e8d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.690046 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-inventory\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.690088 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.690100 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-ceph\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:06 crc kubenswrapper[5028]: I1123 09:15:06.690111 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqdx7\" (UniqueName: \"kubernetes.io/projected/aea6c6e2-e241-4283-bac6-1417dd1c2e8d-kube-api-access-pqdx7\") on node \"crc\" DevicePath \"\""
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.026440 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7" event={"ID":"aea6c6e2-e241-4283-bac6-1417dd1c2e8d","Type":"ContainerDied","Data":"b650f66e1f55297efa3afed43360e505159e38d41edf140c86a791e1cc2cfbc0"}
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.026485 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-hqkw7"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.026524 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b650f66e1f55297efa3afed43360e505159e38d41edf140c86a791e1cc2cfbc0"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.104989 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-9p8rv"]
Nov 23 09:15:07 crc kubenswrapper[5028]: E1123 09:15:07.105999 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea6c6e2-e241-4283-bac6-1417dd1c2e8d" containerName="validate-network-openstack-openstack-cell1"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.106083 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea6c6e2-e241-4283-bac6-1417dd1c2e8d" containerName="validate-network-openstack-openstack-cell1"
Nov 23 09:15:07 crc kubenswrapper[5028]: E1123 09:15:07.106169 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55893335-2a6d-4bfa-b107-71c03cec23bb" containerName="collect-profiles"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.106239 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="55893335-2a6d-4bfa-b107-71c03cec23bb" containerName="collect-profiles"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.106525 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea6c6e2-e241-4283-bac6-1417dd1c2e8d" containerName="validate-network-openstack-openstack-cell1"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.106597 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="55893335-2a6d-4bfa-b107-71c03cec23bb" containerName="collect-profiles"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.107629 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.111161 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.111464 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.128603 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-9p8rv"]
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.205111 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ceph\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.206579 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwp6r\" (UniqueName: \"kubernetes.io/projected/0867f0cd-0d42-4b6f-826c-4ec51f20df02-kube-api-access-dwp6r\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.207411 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ssh-key\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.208832 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-inventory\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.311546 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ssh-key\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.312104 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-inventory\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.312233 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ceph\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.312269 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwp6r\" (UniqueName: \"kubernetes.io/projected/0867f0cd-0d42-4b6f-826c-4ec51f20df02-kube-api-access-dwp6r\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.323412 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ceph\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.323480 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-inventory\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.323875 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ssh-key\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.330253 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwp6r\" (UniqueName: \"kubernetes.io/projected/0867f0cd-0d42-4b6f-826c-4ec51f20df02-kube-api-access-dwp6r\") pod \"install-os-openstack-openstack-cell1-9p8rv\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:07 crc kubenswrapper[5028]: I1123 09:15:07.471186 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-9p8rv"
Nov 23 09:15:08 crc kubenswrapper[5028]: I1123 09:15:08.063267 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-9p8rv"]
Nov 23 09:15:08 crc kubenswrapper[5028]: W1123 09:15:08.070595 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0867f0cd_0d42_4b6f_826c_4ec51f20df02.slice/crio-03d971d3db7de22068d799e8534894c3cf6cd9c7295f89498acdaecb5d1d3bc2 WatchSource:0}: Error finding container 03d971d3db7de22068d799e8534894c3cf6cd9c7295f89498acdaecb5d1d3bc2: Status 404 returned error can't find the container with id 03d971d3db7de22068d799e8534894c3cf6cd9c7295f89498acdaecb5d1d3bc2
Nov 23 09:15:09 crc kubenswrapper[5028]: I1123 09:15:09.048093 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-9p8rv" event={"ID":"0867f0cd-0d42-4b6f-826c-4ec51f20df02","Type":"ContainerStarted","Data":"77f192d8312e596a9400535eba9294830fe093d352b8f7a8d6cd5378fb074f4e"}
Nov 23 09:15:09 crc kubenswrapper[5028]: I1123 09:15:09.048894 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-9p8rv" event={"ID":"0867f0cd-0d42-4b6f-826c-4ec51f20df02","Type":"ContainerStarted","Data":"03d971d3db7de22068d799e8534894c3cf6cd9c7295f89498acdaecb5d1d3bc2"}
Nov 23 09:15:09 crc kubenswrapper[5028]: I1123 09:15:09.094488 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-9p8rv" podStartSLOduration=1.6265447339999999 podStartE2EDuration="2.094454405s" podCreationTimestamp="2025-11-23 09:15:07 +0000 UTC" firstStartedPulling="2025-11-23 09:15:08.07431256 +0000 UTC m=+8691.771717359" lastFinishedPulling="2025-11-23 09:15:08.542222251 +0000 UTC m=+8692.239627030" observedRunningTime="2025-11-23 09:15:09.070413948 +0000 UTC m=+8692.767818737" watchObservedRunningTime="2025-11-23 09:15:09.094454405 +0000 UTC m=+8692.791859184"
Nov 23 09:15:32 crc kubenswrapper[5028]: I1123 09:15:32.357257 5028 scope.go:117] "RemoveContainer" containerID="5da2659e3fd1737d19e7d6f3e4085af443dcc6167c8efb57c9301f8c080b9af0"
Nov 23 09:16:00 crc kubenswrapper[5028]: I1123 09:16:00.710444 5028 generic.go:334] "Generic (PLEG): container finished" podID="bdb64a24-7e0c-451f-b487-7a17e61c0743" containerID="29d569da4aaa6efcd855db3c4e79a186ac6ce8d70ab9e56b774056c004147551" exitCode=0
Nov 23 09:16:00 crc kubenswrapper[5028]: I1123 09:16:00.710721 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-cfwxv" event={"ID":"bdb64a24-7e0c-451f-b487-7a17e61c0743","Type":"ContainerDied","Data":"29d569da4aaa6efcd855db3c4e79a186ac6ce8d70ab9e56b774056c004147551"}
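
Each dataplane job in this log leaves matched ContainerStarted/ContainerDied PLEG events, so a pod's lifespan can be reconstructed mechanically from a dump like this. A Go sketch that scans journal lines for those kubenswrapper events (the regex is fitted to the format shown above and is an assumption, not an official log schema):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the PLEG event payload as it appears in these journal lines,
// e.g. pod="openstack/..." event={"ID":"...","Type":"ContainerDied","Data":"<hash>"}.
var pleg = regexp.MustCompile(
	`pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"(ContainerStarted|ContainerDied)","Data":"([0-9a-f]+)"\}`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := pleg.FindStringSubmatch(sc.Text()); m != nil {
			pod, typ, id := m[1], m[3], m[4]
			fmt.Printf("%-16s %s %s\n", typ, pod, id[:12])
		}
	}
}

Feeding this section through stdin would, for example, pair container 29d569da... (started 09:15:04, died 09:16:00) for configure-os-openstack-openstack-networker-cfwxv.
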
Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-cfwxv" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.371452 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-inventory\") pod \"bdb64a24-7e0c-451f-b487-7a17e61c0743\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.371507 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbdqw\" (UniqueName: \"kubernetes.io/projected/bdb64a24-7e0c-451f-b487-7a17e61c0743-kube-api-access-gbdqw\") pod \"bdb64a24-7e0c-451f-b487-7a17e61c0743\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.371528 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-ssh-key\") pod \"bdb64a24-7e0c-451f-b487-7a17e61c0743\" (UID: \"bdb64a24-7e0c-451f-b487-7a17e61c0743\") " Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.386345 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb64a24-7e0c-451f-b487-7a17e61c0743-kube-api-access-gbdqw" (OuterVolumeSpecName: "kube-api-access-gbdqw") pod "bdb64a24-7e0c-451f-b487-7a17e61c0743" (UID: "bdb64a24-7e0c-451f-b487-7a17e61c0743"). InnerVolumeSpecName "kube-api-access-gbdqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.403883 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-inventory" (OuterVolumeSpecName: "inventory") pod "bdb64a24-7e0c-451f-b487-7a17e61c0743" (UID: "bdb64a24-7e0c-451f-b487-7a17e61c0743"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.414578 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bdb64a24-7e0c-451f-b487-7a17e61c0743" (UID: "bdb64a24-7e0c-451f-b487-7a17e61c0743"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.475359 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.475419 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbdqw\" (UniqueName: \"kubernetes.io/projected/bdb64a24-7e0c-451f-b487-7a17e61c0743-kube-api-access-gbdqw\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.475436 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdb64a24-7e0c-451f-b487-7a17e61c0743-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.740251 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-networker-cfwxv" event={"ID":"bdb64a24-7e0c-451f-b487-7a17e61c0743","Type":"ContainerDied","Data":"1cf29f3e2e7fcb4f9cd6698980ab3e6f9a2becb234518e92ec3142427a9b2ee5"} Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.740309 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf29f3e2e7fcb4f9cd6698980ab3e6f9a2becb234518e92ec3142427a9b2ee5" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.740352 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-networker-cfwxv" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.867482 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-networker-jpws6"] Nov 23 09:16:02 crc kubenswrapper[5028]: E1123 09:16:02.868126 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb64a24-7e0c-451f-b487-7a17e61c0743" containerName="configure-os-openstack-openstack-networker" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.868150 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb64a24-7e0c-451f-b487-7a17e61c0743" containerName="configure-os-openstack-openstack-networker" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.868410 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb64a24-7e0c-451f-b487-7a17e61c0743" containerName="configure-os-openstack-openstack-networker" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.869489 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.873975 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.874313 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.886225 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-networker-jpws6"] Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.886842 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66jfb\" (UniqueName: \"kubernetes.io/projected/a8249946-f816-404d-bc42-ad98c813df1e-kube-api-access-66jfb\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.886917 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-ssh-key\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.886978 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-inventory\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.989705 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66jfb\" (UniqueName: \"kubernetes.io/projected/a8249946-f816-404d-bc42-ad98c813df1e-kube-api-access-66jfb\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.989808 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-ssh-key\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:02 crc kubenswrapper[5028]: I1123 09:16:02.989876 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-inventory\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:03 crc kubenswrapper[5028]: I1123 09:16:02.996939 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-ssh-key\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 
09:16:03 crc kubenswrapper[5028]: I1123 09:16:02.998384 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-inventory\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:03 crc kubenswrapper[5028]: I1123 09:16:03.015438 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66jfb\" (UniqueName: \"kubernetes.io/projected/a8249946-f816-404d-bc42-ad98c813df1e-kube-api-access-66jfb\") pod \"run-os-openstack-openstack-networker-jpws6\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:03 crc kubenswrapper[5028]: I1123 09:16:03.208012 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:03 crc kubenswrapper[5028]: I1123 09:16:03.824718 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-networker-jpws6"] Nov 23 09:16:04 crc kubenswrapper[5028]: I1123 09:16:04.764728 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-jpws6" event={"ID":"a8249946-f816-404d-bc42-ad98c813df1e","Type":"ContainerStarted","Data":"e6414315857efece773d66b88857ff46acbc1786e6b51450131c5c90a3225fff"} Nov 23 09:16:04 crc kubenswrapper[5028]: I1123 09:16:04.765264 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-jpws6" event={"ID":"a8249946-f816-404d-bc42-ad98c813df1e","Type":"ContainerStarted","Data":"db11634f1495747e0de325faf6084322fa1310a7ab02f25db5d26fa3f31c9205"} Nov 23 09:16:04 crc kubenswrapper[5028]: I1123 09:16:04.793447 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-networker-jpws6" podStartSLOduration=2.351341454 podStartE2EDuration="2.793411164s" podCreationTimestamp="2025-11-23 09:16:02 +0000 UTC" firstStartedPulling="2025-11-23 09:16:03.827448164 +0000 UTC m=+8747.524852943" lastFinishedPulling="2025-11-23 09:16:04.269517874 +0000 UTC m=+8747.966922653" observedRunningTime="2025-11-23 09:16:04.792394159 +0000 UTC m=+8748.489798938" watchObservedRunningTime="2025-11-23 09:16:04.793411164 +0000 UTC m=+8748.490815943" Nov 23 09:16:08 crc kubenswrapper[5028]: I1123 09:16:08.812597 5028 generic.go:334] "Generic (PLEG): container finished" podID="0867f0cd-0d42-4b6f-826c-4ec51f20df02" containerID="77f192d8312e596a9400535eba9294830fe093d352b8f7a8d6cd5378fb074f4e" exitCode=0 Nov 23 09:16:08 crc kubenswrapper[5028]: I1123 09:16:08.812886 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-9p8rv" event={"ID":"0867f0cd-0d42-4b6f-826c-4ec51f20df02","Type":"ContainerDied","Data":"77f192d8312e596a9400535eba9294830fe093d352b8f7a8d6cd5378fb074f4e"} Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.293836 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-9p8rv" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.403370 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ssh-key\") pod \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.403593 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwp6r\" (UniqueName: \"kubernetes.io/projected/0867f0cd-0d42-4b6f-826c-4ec51f20df02-kube-api-access-dwp6r\") pod \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.403730 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-inventory\") pod \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.403831 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ceph\") pod \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\" (UID: \"0867f0cd-0d42-4b6f-826c-4ec51f20df02\") " Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.412226 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0867f0cd-0d42-4b6f-826c-4ec51f20df02-kube-api-access-dwp6r" (OuterVolumeSpecName: "kube-api-access-dwp6r") pod "0867f0cd-0d42-4b6f-826c-4ec51f20df02" (UID: "0867f0cd-0d42-4b6f-826c-4ec51f20df02"). InnerVolumeSpecName "kube-api-access-dwp6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.417135 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ceph" (OuterVolumeSpecName: "ceph") pod "0867f0cd-0d42-4b6f-826c-4ec51f20df02" (UID: "0867f0cd-0d42-4b6f-826c-4ec51f20df02"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.448753 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-inventory" (OuterVolumeSpecName: "inventory") pod "0867f0cd-0d42-4b6f-826c-4ec51f20df02" (UID: "0867f0cd-0d42-4b6f-826c-4ec51f20df02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.449939 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0867f0cd-0d42-4b6f-826c-4ec51f20df02" (UID: "0867f0cd-0d42-4b6f-826c-4ec51f20df02"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.511199 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwp6r\" (UniqueName: \"kubernetes.io/projected/0867f0cd-0d42-4b6f-826c-4ec51f20df02-kube-api-access-dwp6r\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.511248 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.511262 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.511274 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0867f0cd-0d42-4b6f-826c-4ec51f20df02-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.838742 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-9p8rv" event={"ID":"0867f0cd-0d42-4b6f-826c-4ec51f20df02","Type":"ContainerDied","Data":"03d971d3db7de22068d799e8534894c3cf6cd9c7295f89498acdaecb5d1d3bc2"} Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.839143 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d971d3db7de22068d799e8534894c3cf6cd9c7295f89498acdaecb5d1d3bc2" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.838865 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-9p8rv" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.933793 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-kjpnf"] Nov 23 09:16:10 crc kubenswrapper[5028]: E1123 09:16:10.934798 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0867f0cd-0d42-4b6f-826c-4ec51f20df02" containerName="install-os-openstack-openstack-cell1" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.934831 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="0867f0cd-0d42-4b6f-826c-4ec51f20df02" containerName="install-os-openstack-openstack-cell1" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.935244 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="0867f0cd-0d42-4b6f-826c-4ec51f20df02" containerName="install-os-openstack-openstack-cell1" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.937048 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.940140 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.945251 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-kjpnf"] Nov 23 09:16:10 crc kubenswrapper[5028]: I1123 09:16:10.947042 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.024885 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2qhp\" (UniqueName: \"kubernetes.io/projected/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-kube-api-access-l2qhp\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.025187 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ceph\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.025363 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ssh-key\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.025465 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-inventory\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.127477 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-inventory\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.127624 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2qhp\" (UniqueName: \"kubernetes.io/projected/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-kube-api-access-l2qhp\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.127749 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ceph\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " 
pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.128846 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ssh-key\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.132797 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-inventory\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.135455 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ssh-key\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.141494 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ceph\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.159782 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2qhp\" (UniqueName: \"kubernetes.io/projected/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-kube-api-access-l2qhp\") pod \"configure-os-openstack-openstack-cell1-kjpnf\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:11 crc kubenswrapper[5028]: I1123 09:16:11.260737 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:16:12 crc kubenswrapper[5028]: I1123 09:16:12.011332 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-kjpnf"] Nov 23 09:16:12 crc kubenswrapper[5028]: I1123 09:16:12.910646 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" event={"ID":"7ee6d0e6-9f18-4e06-8334-685ed129a0c4","Type":"ContainerStarted","Data":"0989a4a00b834cb774373a6427806384a45f536ff57ed2fe69d7e7820b224935"} Nov 23 09:16:12 crc kubenswrapper[5028]: I1123 09:16:12.911169 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" event={"ID":"7ee6d0e6-9f18-4e06-8334-685ed129a0c4","Type":"ContainerStarted","Data":"c04ee806fca431d55a87ac854d0a6cfd24f5678e2e5d042ca4c7ccc9d8ae36be"} Nov 23 09:16:12 crc kubenswrapper[5028]: I1123 09:16:12.937411 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" podStartSLOduration=2.461276512 podStartE2EDuration="2.937392516s" podCreationTimestamp="2025-11-23 09:16:10 +0000 UTC" firstStartedPulling="2025-11-23 09:16:12.016880004 +0000 UTC m=+8755.714284783" lastFinishedPulling="2025-11-23 09:16:12.492995978 +0000 UTC m=+8756.190400787" observedRunningTime="2025-11-23 09:16:12.930493555 +0000 UTC m=+8756.627898344" watchObservedRunningTime="2025-11-23 09:16:12.937392516 +0000 UTC m=+8756.634797295" Nov 23 09:16:16 crc kubenswrapper[5028]: I1123 09:16:16.956353 5028 generic.go:334] "Generic (PLEG): container finished" podID="a8249946-f816-404d-bc42-ad98c813df1e" containerID="e6414315857efece773d66b88857ff46acbc1786e6b51450131c5c90a3225fff" exitCode=0 Nov 23 09:16:16 crc kubenswrapper[5028]: I1123 09:16:16.957116 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-jpws6" event={"ID":"a8249946-f816-404d-bc42-ad98c813df1e","Type":"ContainerDied","Data":"e6414315857efece773d66b88857ff46acbc1786e6b51450131c5c90a3225fff"} Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.450083 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.553825 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-ssh-key\") pod \"a8249946-f816-404d-bc42-ad98c813df1e\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.553929 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-inventory\") pod \"a8249946-f816-404d-bc42-ad98c813df1e\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.554023 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66jfb\" (UniqueName: \"kubernetes.io/projected/a8249946-f816-404d-bc42-ad98c813df1e-kube-api-access-66jfb\") pod \"a8249946-f816-404d-bc42-ad98c813df1e\" (UID: \"a8249946-f816-404d-bc42-ad98c813df1e\") " Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.563172 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8249946-f816-404d-bc42-ad98c813df1e-kube-api-access-66jfb" (OuterVolumeSpecName: "kube-api-access-66jfb") pod "a8249946-f816-404d-bc42-ad98c813df1e" (UID: "a8249946-f816-404d-bc42-ad98c813df1e"). InnerVolumeSpecName "kube-api-access-66jfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.585757 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-inventory" (OuterVolumeSpecName: "inventory") pod "a8249946-f816-404d-bc42-ad98c813df1e" (UID: "a8249946-f816-404d-bc42-ad98c813df1e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.593710 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a8249946-f816-404d-bc42-ad98c813df1e" (UID: "a8249946-f816-404d-bc42-ad98c813df1e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.656938 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.657117 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8249946-f816-404d-bc42-ad98c813df1e-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.657204 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66jfb\" (UniqueName: \"kubernetes.io/projected/a8249946-f816-404d-bc42-ad98c813df1e-kube-api-access-66jfb\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.982347 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-networker-jpws6" event={"ID":"a8249946-f816-404d-bc42-ad98c813df1e","Type":"ContainerDied","Data":"db11634f1495747e0de325faf6084322fa1310a7ab02f25db5d26fa3f31c9205"} Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.982408 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db11634f1495747e0de325faf6084322fa1310a7ab02f25db5d26fa3f31c9205" Nov 23 09:16:18 crc kubenswrapper[5028]: I1123 09:16:18.982829 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-networker-jpws6" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.071200 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-whpjr"] Nov 23 09:16:19 crc kubenswrapper[5028]: E1123 09:16:19.071670 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8249946-f816-404d-bc42-ad98c813df1e" containerName="run-os-openstack-openstack-networker" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.071701 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8249946-f816-404d-bc42-ad98c813df1e" containerName="run-os-openstack-openstack-networker" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.071996 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8249946-f816-404d-bc42-ad98c813df1e" containerName="run-os-openstack-openstack-networker" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.072856 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.076617 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.076992 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.100286 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-whpjr"] Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.168541 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzkj\" (UniqueName: \"kubernetes.io/projected/683efa16-72c3-46fd-a5b8-82b41754468e-kube-api-access-9wzkj\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.168627 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-inventory\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.168962 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-ssh-key\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.271056 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-inventory\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.271322 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-ssh-key\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.271423 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wzkj\" (UniqueName: \"kubernetes.io/projected/683efa16-72c3-46fd-a5b8-82b41754468e-kube-api-access-9wzkj\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.453838 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-inventory\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " 
pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.455166 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wzkj\" (UniqueName: \"kubernetes.io/projected/683efa16-72c3-46fd-a5b8-82b41754468e-kube-api-access-9wzkj\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.456633 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-ssh-key\") pod \"reboot-os-openstack-openstack-networker-whpjr\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:19 crc kubenswrapper[5028]: I1123 09:16:19.691927 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:20 crc kubenswrapper[5028]: I1123 09:16:20.301785 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-networker-whpjr"] Nov 23 09:16:21 crc kubenswrapper[5028]: I1123 09:16:21.006721 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" event={"ID":"683efa16-72c3-46fd-a5b8-82b41754468e","Type":"ContainerStarted","Data":"ff3da520e697bd5383189c6f8f799e3d4a062edad37dc58cb870344228ec8266"} Nov 23 09:16:22 crc kubenswrapper[5028]: I1123 09:16:22.030307 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" event={"ID":"683efa16-72c3-46fd-a5b8-82b41754468e","Type":"ContainerStarted","Data":"8dda2130c7bc734d74da158cc21db8cbc965358def012bcb0c8ec6c199bdae24"} Nov 23 09:16:22 crc kubenswrapper[5028]: I1123 09:16:22.061992 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" podStartSLOduration=2.635451379 podStartE2EDuration="3.061964233s" podCreationTimestamp="2025-11-23 09:16:19 +0000 UTC" firstStartedPulling="2025-11-23 09:16:20.31575919 +0000 UTC m=+8764.013163969" lastFinishedPulling="2025-11-23 09:16:20.742272044 +0000 UTC m=+8764.439676823" observedRunningTime="2025-11-23 09:16:22.04816969 +0000 UTC m=+8765.745574469" watchObservedRunningTime="2025-11-23 09:16:22.061964233 +0000 UTC m=+8765.759369012" Nov 23 09:16:39 crc kubenswrapper[5028]: I1123 09:16:39.317162 5028 generic.go:334] "Generic (PLEG): container finished" podID="683efa16-72c3-46fd-a5b8-82b41754468e" containerID="8dda2130c7bc734d74da158cc21db8cbc965358def012bcb0c8ec6c199bdae24" exitCode=0 Nov 23 09:16:39 crc kubenswrapper[5028]: I1123 09:16:39.318038 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" event={"ID":"683efa16-72c3-46fd-a5b8-82b41754468e","Type":"ContainerDied","Data":"8dda2130c7bc734d74da158cc21db8cbc965358def012bcb0c8ec6c199bdae24"} Nov 23 09:16:40 crc kubenswrapper[5028]: I1123 09:16:40.894002 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.061448 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-ssh-key\") pod \"683efa16-72c3-46fd-a5b8-82b41754468e\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.061557 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-inventory\") pod \"683efa16-72c3-46fd-a5b8-82b41754468e\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.061658 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wzkj\" (UniqueName: \"kubernetes.io/projected/683efa16-72c3-46fd-a5b8-82b41754468e-kube-api-access-9wzkj\") pod \"683efa16-72c3-46fd-a5b8-82b41754468e\" (UID: \"683efa16-72c3-46fd-a5b8-82b41754468e\") " Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.068810 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683efa16-72c3-46fd-a5b8-82b41754468e-kube-api-access-9wzkj" (OuterVolumeSpecName: "kube-api-access-9wzkj") pod "683efa16-72c3-46fd-a5b8-82b41754468e" (UID: "683efa16-72c3-46fd-a5b8-82b41754468e"). InnerVolumeSpecName "kube-api-access-9wzkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.100559 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-inventory" (OuterVolumeSpecName: "inventory") pod "683efa16-72c3-46fd-a5b8-82b41754468e" (UID: "683efa16-72c3-46fd-a5b8-82b41754468e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.119928 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "683efa16-72c3-46fd-a5b8-82b41754468e" (UID: "683efa16-72c3-46fd-a5b8-82b41754468e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.164972 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.165492 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wzkj\" (UniqueName: \"kubernetes.io/projected/683efa16-72c3-46fd-a5b8-82b41754468e-kube-api-access-9wzkj\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.165505 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/683efa16-72c3-46fd-a5b8-82b41754468e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.366308 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" event={"ID":"683efa16-72c3-46fd-a5b8-82b41754468e","Type":"ContainerDied","Data":"ff3da520e697bd5383189c6f8f799e3d4a062edad37dc58cb870344228ec8266"} Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.366494 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff3da520e697bd5383189c6f8f799e3d4a062edad37dc58cb870344228ec8266" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.366712 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-networker-whpjr" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.481754 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-networker-5p94z"] Nov 23 09:16:41 crc kubenswrapper[5028]: E1123 09:16:41.482454 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683efa16-72c3-46fd-a5b8-82b41754468e" containerName="reboot-os-openstack-openstack-networker" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.482471 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="683efa16-72c3-46fd-a5b8-82b41754468e" containerName="reboot-os-openstack-openstack-networker" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.482712 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="683efa16-72c3-46fd-a5b8-82b41754468e" containerName="reboot-os-openstack-openstack-networker" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.483727 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.487221 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.487465 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.515116 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-networker-5p94z"] Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.585336 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.585532 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ssh-key\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.585963 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-inventory\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.586298 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ptjc\" (UniqueName: \"kubernetes.io/projected/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-kube-api-access-6ptjc\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.586396 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.586474 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.688604 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-inventory\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.688722 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ptjc\" (UniqueName: \"kubernetes.io/projected/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-kube-api-access-6ptjc\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.688759 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.688792 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.688837 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.688896 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ssh-key\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.696213 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-inventory\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.696235 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.697189 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ssh-key\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.697661 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.697736 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.707636 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ptjc\" (UniqueName: \"kubernetes.io/projected/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-kube-api-access-6ptjc\") pod \"install-certs-openstack-openstack-networker-5p94z\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:41 crc kubenswrapper[5028]: I1123 09:16:41.823717 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:42 crc kubenswrapper[5028]: I1123 09:16:42.375884 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-networker-5p94z"] Nov 23 09:16:42 crc kubenswrapper[5028]: W1123 09:16:42.380549 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod435a8ca8_e3b4_46e7_83f5_78080ddeeb67.slice/crio-42792bc85237a2960447b7370445c2446dbe979b35084aca2f9f76fe8df69f10 WatchSource:0}: Error finding container 42792bc85237a2960447b7370445c2446dbe979b35084aca2f9f76fe8df69f10: Status 404 returned error can't find the container with id 42792bc85237a2960447b7370445c2446dbe979b35084aca2f9f76fe8df69f10 Nov 23 09:16:43 crc kubenswrapper[5028]: I1123 09:16:43.391713 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-5p94z" event={"ID":"435a8ca8-e3b4-46e7-83f5-78080ddeeb67","Type":"ContainerStarted","Data":"0e1df72a37c26baff7478b199cf490ddd592cddd97a419bdc1e72158234c6c04"} Nov 23 09:16:43 crc kubenswrapper[5028]: I1123 09:16:43.392241 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-5p94z" event={"ID":"435a8ca8-e3b4-46e7-83f5-78080ddeeb67","Type":"ContainerStarted","Data":"42792bc85237a2960447b7370445c2446dbe979b35084aca2f9f76fe8df69f10"} Nov 23 09:16:43 crc kubenswrapper[5028]: I1123 09:16:43.414564 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-networker-5p94z" podStartSLOduration=1.949796773 podStartE2EDuration="2.414539476s" podCreationTimestamp="2025-11-23 09:16:41 +0000 UTC" firstStartedPulling="2025-11-23 09:16:42.383397438 +0000 UTC 
m=+8786.080802217" lastFinishedPulling="2025-11-23 09:16:42.848140101 +0000 UTC m=+8786.545544920" observedRunningTime="2025-11-23 09:16:43.410919497 +0000 UTC m=+8787.108324276" watchObservedRunningTime="2025-11-23 09:16:43.414539476 +0000 UTC m=+8787.111944255" Nov 23 09:16:54 crc kubenswrapper[5028]: I1123 09:16:54.507578 5028 generic.go:334] "Generic (PLEG): container finished" podID="435a8ca8-e3b4-46e7-83f5-78080ddeeb67" containerID="0e1df72a37c26baff7478b199cf490ddd592cddd97a419bdc1e72158234c6c04" exitCode=0 Nov 23 09:16:54 crc kubenswrapper[5028]: I1123 09:16:54.507690 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-5p94z" event={"ID":"435a8ca8-e3b4-46e7-83f5-78080ddeeb67","Type":"ContainerDied","Data":"0e1df72a37c26baff7478b199cf490ddd592cddd97a419bdc1e72158234c6c04"} Nov 23 09:16:55 crc kubenswrapper[5028]: I1123 09:16:55.981764 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.026080 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ptjc\" (UniqueName: \"kubernetes.io/projected/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-kube-api-access-6ptjc\") pod \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.026151 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-bootstrap-combined-ca-bundle\") pod \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.026245 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-inventory\") pod \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.026360 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ovn-combined-ca-bundle\") pod \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.026422 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-neutron-metadata-combined-ca-bundle\") pod \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.026569 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ssh-key\") pod \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\" (UID: \"435a8ca8-e3b4-46e7-83f5-78080ddeeb67\") " Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.033135 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-kube-api-access-6ptjc" (OuterVolumeSpecName: "kube-api-access-6ptjc") pod 
"435a8ca8-e3b4-46e7-83f5-78080ddeeb67" (UID: "435a8ca8-e3b4-46e7-83f5-78080ddeeb67"). InnerVolumeSpecName "kube-api-access-6ptjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.033907 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "435a8ca8-e3b4-46e7-83f5-78080ddeeb67" (UID: "435a8ca8-e3b4-46e7-83f5-78080ddeeb67"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.034302 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "435a8ca8-e3b4-46e7-83f5-78080ddeeb67" (UID: "435a8ca8-e3b4-46e7-83f5-78080ddeeb67"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.034852 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "435a8ca8-e3b4-46e7-83f5-78080ddeeb67" (UID: "435a8ca8-e3b4-46e7-83f5-78080ddeeb67"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.060922 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-inventory" (OuterVolumeSpecName: "inventory") pod "435a8ca8-e3b4-46e7-83f5-78080ddeeb67" (UID: "435a8ca8-e3b4-46e7-83f5-78080ddeeb67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.079223 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "435a8ca8-e3b4-46e7-83f5-78080ddeeb67" (UID: "435a8ca8-e3b4-46e7-83f5-78080ddeeb67"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.128321 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ptjc\" (UniqueName: \"kubernetes.io/projected/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-kube-api-access-6ptjc\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.128352 5028 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.128364 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.128374 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.128382 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.128392 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/435a8ca8-e3b4-46e7-83f5-78080ddeeb67-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.539653 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-networker-5p94z" event={"ID":"435a8ca8-e3b4-46e7-83f5-78080ddeeb67","Type":"ContainerDied","Data":"42792bc85237a2960447b7370445c2446dbe979b35084aca2f9f76fe8df69f10"} Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.540206 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42792bc85237a2960447b7370445c2446dbe979b35084aca2f9f76fe8df69f10" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.540293 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-networker-5p94z" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.624236 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-networker-v4srx"] Nov 23 09:16:56 crc kubenswrapper[5028]: E1123 09:16:56.624740 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435a8ca8-e3b4-46e7-83f5-78080ddeeb67" containerName="install-certs-openstack-openstack-networker" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.624761 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="435a8ca8-e3b4-46e7-83f5-78080ddeeb67" containerName="install-certs-openstack-openstack-networker" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.625058 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="435a8ca8-e3b4-46e7-83f5-78080ddeeb67" containerName="install-certs-openstack-openstack-networker" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.626086 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.629274 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.629369 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.629815 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.636307 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-networker-v4srx"] Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.643630 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.643709 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ssh-key\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.643739 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-inventory\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.643814 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9cb79473-ee84-4aef-b25e-81acc20abf95-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.643841 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvhg\" (UniqueName: \"kubernetes.io/projected/9cb79473-ee84-4aef-b25e-81acc20abf95-kube-api-access-ltvhg\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.746260 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.746335 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ssh-key\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.746367 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-inventory\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.746434 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9cb79473-ee84-4aef-b25e-81acc20abf95-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.746460 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvhg\" (UniqueName: \"kubernetes.io/projected/9cb79473-ee84-4aef-b25e-81acc20abf95-kube-api-access-ltvhg\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.748590 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9cb79473-ee84-4aef-b25e-81acc20abf95-ovncontroller-config-0\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.753368 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ssh-key\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.753401 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-inventory\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.754052 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 09:16:56.765223 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvhg\" (UniqueName: \"kubernetes.io/projected/9cb79473-ee84-4aef-b25e-81acc20abf95-kube-api-access-ltvhg\") pod \"ovn-openstack-openstack-networker-v4srx\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:56 crc kubenswrapper[5028]: I1123 
09:16:56.952639 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:16:57 crc kubenswrapper[5028]: I1123 09:16:57.553324 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-networker-v4srx"] Nov 23 09:16:58 crc kubenswrapper[5028]: I1123 09:16:58.563110 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-v4srx" event={"ID":"9cb79473-ee84-4aef-b25e-81acc20abf95","Type":"ContainerStarted","Data":"24ab438915057fd6b3e5afa952641d275808b45c329d4af89abe563f4d5b039f"} Nov 23 09:16:59 crc kubenswrapper[5028]: I1123 09:16:59.589202 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-v4srx" event={"ID":"9cb79473-ee84-4aef-b25e-81acc20abf95","Type":"ContainerStarted","Data":"88a833a99f922cf24d028f2275130b4ece6eb02d7868e790ea503be6963b31dd"} Nov 23 09:16:59 crc kubenswrapper[5028]: I1123 09:16:59.612893 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-networker-v4srx" podStartSLOduration=2.887853287 podStartE2EDuration="3.612875728s" podCreationTimestamp="2025-11-23 09:16:56 +0000 UTC" firstStartedPulling="2025-11-23 09:16:57.56460426 +0000 UTC m=+8801.262009039" lastFinishedPulling="2025-11-23 09:16:58.289626681 +0000 UTC m=+8801.987031480" observedRunningTime="2025-11-23 09:16:59.611165065 +0000 UTC m=+8803.308569844" watchObservedRunningTime="2025-11-23 09:16:59.612875728 +0000 UTC m=+8803.310280507" Nov 23 09:17:09 crc kubenswrapper[5028]: I1123 09:17:09.737222 5028 generic.go:334] "Generic (PLEG): container finished" podID="7ee6d0e6-9f18-4e06-8334-685ed129a0c4" containerID="0989a4a00b834cb774373a6427806384a45f536ff57ed2fe69d7e7820b224935" exitCode=0 Nov 23 09:17:09 crc kubenswrapper[5028]: I1123 09:17:09.737300 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" event={"ID":"7ee6d0e6-9f18-4e06-8334-685ed129a0c4","Type":"ContainerDied","Data":"0989a4a00b834cb774373a6427806384a45f536ff57ed2fe69d7e7820b224935"} Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.213999 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.324537 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-inventory\") pod \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.324611 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2qhp\" (UniqueName: \"kubernetes.io/projected/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-kube-api-access-l2qhp\") pod \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.324781 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ceph\") pod \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.324825 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ssh-key\") pod \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\" (UID: \"7ee6d0e6-9f18-4e06-8334-685ed129a0c4\") " Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.332852 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ceph" (OuterVolumeSpecName: "ceph") pod "7ee6d0e6-9f18-4e06-8334-685ed129a0c4" (UID: "7ee6d0e6-9f18-4e06-8334-685ed129a0c4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.333638 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-kube-api-access-l2qhp" (OuterVolumeSpecName: "kube-api-access-l2qhp") pod "7ee6d0e6-9f18-4e06-8334-685ed129a0c4" (UID: "7ee6d0e6-9f18-4e06-8334-685ed129a0c4"). InnerVolumeSpecName "kube-api-access-l2qhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.362420 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7ee6d0e6-9f18-4e06-8334-685ed129a0c4" (UID: "7ee6d0e6-9f18-4e06-8334-685ed129a0c4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.385914 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-inventory" (OuterVolumeSpecName: "inventory") pod "7ee6d0e6-9f18-4e06-8334-685ed129a0c4" (UID: "7ee6d0e6-9f18-4e06-8334-685ed129a0c4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.428304 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.428338 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.428352 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.428411 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2qhp\" (UniqueName: \"kubernetes.io/projected/7ee6d0e6-9f18-4e06-8334-685ed129a0c4-kube-api-access-l2qhp\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.765004 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" event={"ID":"7ee6d0e6-9f18-4e06-8334-685ed129a0c4","Type":"ContainerDied","Data":"c04ee806fca431d55a87ac854d0a6cfd24f5678e2e5d042ca4c7ccc9d8ae36be"} Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.765604 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c04ee806fca431d55a87ac854d0a6cfd24f5678e2e5d042ca4c7ccc9d8ae36be" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.765634 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-kjpnf" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.896128 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-vhwqv"] Nov 23 09:17:11 crc kubenswrapper[5028]: E1123 09:17:11.896872 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee6d0e6-9f18-4e06-8334-685ed129a0c4" containerName="configure-os-openstack-openstack-cell1" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.896900 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee6d0e6-9f18-4e06-8334-685ed129a0c4" containerName="configure-os-openstack-openstack-cell1" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.897261 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee6d0e6-9f18-4e06-8334-685ed129a0c4" containerName="configure-os-openstack-openstack-cell1" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.898450 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.903886 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.904097 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.916254 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-vhwqv"] Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.941916 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ceph\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.942001 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.942049 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt98t\" (UniqueName: \"kubernetes.io/projected/e482f4c5-016f-44df-85f2-eab9a442ba9c-kube-api-access-qt98t\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.942081 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-0\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.942549 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-1\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:11 crc kubenswrapper[5028]: I1123 09:17:11.942707 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.043677 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ceph\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.044210 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.044332 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt98t\" (UniqueName: \"kubernetes.io/projected/e482f4c5-016f-44df-85f2-eab9a442ba9c-kube-api-access-qt98t\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.044433 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-0\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.044571 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-1\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.044676 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.051586 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-networker\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.051645 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.051652 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-0\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.051837 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-1\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.054009 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ceph\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.075896 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt98t\" (UniqueName: \"kubernetes.io/projected/e482f4c5-016f-44df-85f2-eab9a442ba9c-kube-api-access-qt98t\") pod \"ssh-known-hosts-openstack-vhwqv\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.231820 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:12 crc kubenswrapper[5028]: I1123 09:17:12.883552 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-vhwqv"] Nov 23 09:17:13 crc kubenswrapper[5028]: I1123 09:17:13.790691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-vhwqv" event={"ID":"e482f4c5-016f-44df-85f2-eab9a442ba9c","Type":"ContainerStarted","Data":"58024ad4dc70581f11eda0605a63d744ff826d159faef0158ef926aa7bff6219"} Nov 23 09:17:13 crc kubenswrapper[5028]: I1123 09:17:13.791161 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-vhwqv" event={"ID":"e482f4c5-016f-44df-85f2-eab9a442ba9c","Type":"ContainerStarted","Data":"d0645d597ec7c5f11568b092c8a23b9c86cb24b6cf9994131e26001a1424483b"} Nov 23 09:17:13 crc kubenswrapper[5028]: I1123 09:17:13.821724 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-vhwqv" podStartSLOduration=2.393796255 podStartE2EDuration="2.821700852s" podCreationTimestamp="2025-11-23 09:17:11 +0000 UTC" firstStartedPulling="2025-11-23 09:17:12.904086832 +0000 UTC m=+8816.601491611" lastFinishedPulling="2025-11-23 09:17:13.331991429 +0000 UTC m=+8817.029396208" observedRunningTime="2025-11-23 09:17:13.810208586 +0000 UTC m=+8817.507613375" watchObservedRunningTime="2025-11-23 09:17:13.821700852 +0000 UTC m=+8817.519105631" Nov 23 09:17:30 crc kubenswrapper[5028]: I1123 09:17:30.946436 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:17:30 crc kubenswrapper[5028]: I1123 09:17:30.948612 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:17:30 crc kubenswrapper[5028]: I1123 09:17:30.993328 5028 generic.go:334] "Generic (PLEG): container finished" podID="e482f4c5-016f-44df-85f2-eab9a442ba9c" containerID="58024ad4dc70581f11eda0605a63d744ff826d159faef0158ef926aa7bff6219" exitCode=0 Nov 23 09:17:30 crc kubenswrapper[5028]: I1123 09:17:30.993415 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-vhwqv" 
event={"ID":"e482f4c5-016f-44df-85f2-eab9a442ba9c","Type":"ContainerDied","Data":"58024ad4dc70581f11eda0605a63d744ff826d159faef0158ef926aa7bff6219"} Nov 23 09:17:32 crc kubenswrapper[5028]: I1123 09:17:32.911156 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.000756 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ceph\") pod \"e482f4c5-016f-44df-85f2-eab9a442ba9c\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.000829 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-networker\") pod \"e482f4c5-016f-44df-85f2-eab9a442ba9c\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.000988 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-1\") pod \"e482f4c5-016f-44df-85f2-eab9a442ba9c\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.001060 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt98t\" (UniqueName: \"kubernetes.io/projected/e482f4c5-016f-44df-85f2-eab9a442ba9c-kube-api-access-qt98t\") pod \"e482f4c5-016f-44df-85f2-eab9a442ba9c\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.001100 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-0\") pod \"e482f4c5-016f-44df-85f2-eab9a442ba9c\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.001129 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-cell1\") pod \"e482f4c5-016f-44df-85f2-eab9a442ba9c\" (UID: \"e482f4c5-016f-44df-85f2-eab9a442ba9c\") " Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.011068 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ceph" (OuterVolumeSpecName: "ceph") pod "e482f4c5-016f-44df-85f2-eab9a442ba9c" (UID: "e482f4c5-016f-44df-85f2-eab9a442ba9c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.014544 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e482f4c5-016f-44df-85f2-eab9a442ba9c-kube-api-access-qt98t" (OuterVolumeSpecName: "kube-api-access-qt98t") pod "e482f4c5-016f-44df-85f2-eab9a442ba9c" (UID: "e482f4c5-016f-44df-85f2-eab9a442ba9c"). InnerVolumeSpecName "kube-api-access-qt98t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.034752 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-vhwqv" event={"ID":"e482f4c5-016f-44df-85f2-eab9a442ba9c","Type":"ContainerDied","Data":"d0645d597ec7c5f11568b092c8a23b9c86cb24b6cf9994131e26001a1424483b"} Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.034853 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0645d597ec7c5f11568b092c8a23b9c86cb24b6cf9994131e26001a1424483b" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.034801 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-vhwqv" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.046542 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e482f4c5-016f-44df-85f2-eab9a442ba9c" (UID: "e482f4c5-016f-44df-85f2-eab9a442ba9c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.091936 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e482f4c5-016f-44df-85f2-eab9a442ba9c" (UID: "e482f4c5-016f-44df-85f2-eab9a442ba9c"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.105130 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-1" (OuterVolumeSpecName: "inventory-1") pod "e482f4c5-016f-44df-85f2-eab9a442ba9c" (UID: "e482f4c5-016f-44df-85f2-eab9a442ba9c"). InnerVolumeSpecName "inventory-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.105416 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.105442 5028 reconciler_common.go:293] "Volume detached for volume \"inventory-1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.105453 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt98t\" (UniqueName: \"kubernetes.io/projected/e482f4c5-016f-44df-85f2-eab9a442ba9c-kube-api-access-qt98t\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.105465 5028 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.105475 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.122884 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-networker" (OuterVolumeSpecName: "ssh-key-openstack-networker") pod "e482f4c5-016f-44df-85f2-eab9a442ba9c" (UID: "e482f4c5-016f-44df-85f2-eab9a442ba9c"). InnerVolumeSpecName "ssh-key-openstack-networker". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.207711 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-networker\" (UniqueName: \"kubernetes.io/secret/e482f4c5-016f-44df-85f2-eab9a442ba9c-ssh-key-openstack-networker\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.214571 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-lmc6x"] Nov 23 09:17:33 crc kubenswrapper[5028]: E1123 09:17:33.215161 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e482f4c5-016f-44df-85f2-eab9a442ba9c" containerName="ssh-known-hosts-openstack" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.215187 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e482f4c5-016f-44df-85f2-eab9a442ba9c" containerName="ssh-known-hosts-openstack" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.215519 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e482f4c5-016f-44df-85f2-eab9a442ba9c" containerName="ssh-known-hosts-openstack" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.216627 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-lmc6x"] Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.216807 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.310726 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm59f\" (UniqueName: \"kubernetes.io/projected/677bbe2f-39e2-46e5-ad32-4234b984dbe3-kube-api-access-pm59f\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.311302 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ssh-key\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.311321 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-inventory\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.311427 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ceph\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.413501 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ceph\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.413672 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm59f\" (UniqueName: \"kubernetes.io/projected/677bbe2f-39e2-46e5-ad32-4234b984dbe3-kube-api-access-pm59f\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.413738 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ssh-key\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.413762 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-inventory\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.419456 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ceph\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.420084 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ssh-key\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.420616 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-inventory\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.431625 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm59f\" (UniqueName: \"kubernetes.io/projected/677bbe2f-39e2-46e5-ad32-4234b984dbe3-kube-api-access-pm59f\") pod \"run-os-openstack-openstack-cell1-lmc6x\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:33 crc kubenswrapper[5028]: I1123 09:17:33.539392 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:34 crc kubenswrapper[5028]: I1123 09:17:34.186217 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-lmc6x"] Nov 23 09:17:34 crc kubenswrapper[5028]: W1123 09:17:34.189217 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod677bbe2f_39e2_46e5_ad32_4234b984dbe3.slice/crio-ba3afa06db5d0c3381910ea2e3c9c2b06112fbd4c6e189a98c2791e821ff8a94 WatchSource:0}: Error finding container ba3afa06db5d0c3381910ea2e3c9c2b06112fbd4c6e189a98c2791e821ff8a94: Status 404 returned error can't find the container with id ba3afa06db5d0c3381910ea2e3c9c2b06112fbd4c6e189a98c2791e821ff8a94 Nov 23 09:17:35 crc kubenswrapper[5028]: I1123 09:17:35.066053 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" event={"ID":"677bbe2f-39e2-46e5-ad32-4234b984dbe3","Type":"ContainerStarted","Data":"accd773b07a5fa1da731f66ac6a8086fab73dc08028350dece70b4ebe5eb242b"} Nov 23 09:17:35 crc kubenswrapper[5028]: I1123 09:17:35.066559 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" event={"ID":"677bbe2f-39e2-46e5-ad32-4234b984dbe3","Type":"ContainerStarted","Data":"ba3afa06db5d0c3381910ea2e3c9c2b06112fbd4c6e189a98c2791e821ff8a94"} Nov 23 09:17:42 crc kubenswrapper[5028]: I1123 09:17:42.836852 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" podStartSLOduration=9.421941398 podStartE2EDuration="9.836823832s" podCreationTimestamp="2025-11-23 09:17:33 +0000 UTC" firstStartedPulling="2025-11-23 09:17:34.192295858 +0000 UTC m=+8837.889700637" lastFinishedPulling="2025-11-23 09:17:34.607178292 +0000 UTC m=+8838.304583071" observedRunningTime="2025-11-23 09:17:35.081759859 +0000 UTC m=+8838.779164638" watchObservedRunningTime="2025-11-23 
09:17:42.836823832 +0000 UTC m=+8846.534228611" Nov 23 09:17:42 crc kubenswrapper[5028]: I1123 09:17:42.843557 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8l78d"] Nov 23 09:17:42 crc kubenswrapper[5028]: I1123 09:17:42.846406 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:42 crc kubenswrapper[5028]: I1123 09:17:42.864920 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l78d"] Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.019081 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjc9f\" (UniqueName: \"kubernetes.io/projected/2575d93a-136e-457d-8a07-0ff7485b500d-kube-api-access-sjc9f\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.019656 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-utilities\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.019889 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-catalog-content\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.122915 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-utilities\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.123117 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-catalog-content\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.123240 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjc9f\" (UniqueName: \"kubernetes.io/projected/2575d93a-136e-457d-8a07-0ff7485b500d-kube-api-access-sjc9f\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.123544 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-utilities\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.123577 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-catalog-content\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.150442 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjc9f\" (UniqueName: \"kubernetes.io/projected/2575d93a-136e-457d-8a07-0ff7485b500d-kube-api-access-sjc9f\") pod \"redhat-operators-8l78d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.259502 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:17:43 crc kubenswrapper[5028]: I1123 09:17:43.804115 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8l78d"] Nov 23 09:17:44 crc kubenswrapper[5028]: I1123 09:17:44.168869 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l78d" event={"ID":"2575d93a-136e-457d-8a07-0ff7485b500d","Type":"ContainerStarted","Data":"d91bcd76e8533543ad4735a08b82455b869eebe0d0aa3b1e030d0c449dfd7c9f"} Nov 23 09:17:45 crc kubenswrapper[5028]: I1123 09:17:45.184211 5028 generic.go:334] "Generic (PLEG): container finished" podID="2575d93a-136e-457d-8a07-0ff7485b500d" containerID="97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78" exitCode=0 Nov 23 09:17:45 crc kubenswrapper[5028]: I1123 09:17:45.184307 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l78d" event={"ID":"2575d93a-136e-457d-8a07-0ff7485b500d","Type":"ContainerDied","Data":"97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78"} Nov 23 09:17:45 crc kubenswrapper[5028]: I1123 09:17:45.189871 5028 generic.go:334] "Generic (PLEG): container finished" podID="677bbe2f-39e2-46e5-ad32-4234b984dbe3" containerID="accd773b07a5fa1da731f66ac6a8086fab73dc08028350dece70b4ebe5eb242b" exitCode=0 Nov 23 09:17:45 crc kubenswrapper[5028]: I1123 09:17:45.189923 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" event={"ID":"677bbe2f-39e2-46e5-ad32-4234b984dbe3","Type":"ContainerDied","Data":"accd773b07a5fa1da731f66ac6a8086fab73dc08028350dece70b4ebe5eb242b"} Nov 23 09:17:46 crc kubenswrapper[5028]: I1123 09:17:46.208013 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l78d" event={"ID":"2575d93a-136e-457d-8a07-0ff7485b500d","Type":"ContainerStarted","Data":"a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe"} Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.099380 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.223584 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" event={"ID":"677bbe2f-39e2-46e5-ad32-4234b984dbe3","Type":"ContainerDied","Data":"ba3afa06db5d0c3381910ea2e3c9c2b06112fbd4c6e189a98c2791e821ff8a94"} Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.224981 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3afa06db5d0c3381910ea2e3c9c2b06112fbd4c6e189a98c2791e821ff8a94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.223654 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-lmc6x" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.228855 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-inventory\") pod \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.229215 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm59f\" (UniqueName: \"kubernetes.io/projected/677bbe2f-39e2-46e5-ad32-4234b984dbe3-kube-api-access-pm59f\") pod \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.229345 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ceph\") pod \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.229492 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ssh-key\") pod \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\" (UID: \"677bbe2f-39e2-46e5-ad32-4234b984dbe3\") " Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.243291 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ceph" (OuterVolumeSpecName: "ceph") pod "677bbe2f-39e2-46e5-ad32-4234b984dbe3" (UID: "677bbe2f-39e2-46e5-ad32-4234b984dbe3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.243451 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677bbe2f-39e2-46e5-ad32-4234b984dbe3-kube-api-access-pm59f" (OuterVolumeSpecName: "kube-api-access-pm59f") pod "677bbe2f-39e2-46e5-ad32-4234b984dbe3" (UID: "677bbe2f-39e2-46e5-ad32-4234b984dbe3"). InnerVolumeSpecName "kube-api-access-pm59f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.272255 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "677bbe2f-39e2-46e5-ad32-4234b984dbe3" (UID: "677bbe2f-39e2-46e5-ad32-4234b984dbe3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.283223 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-inventory" (OuterVolumeSpecName: "inventory") pod "677bbe2f-39e2-46e5-ad32-4234b984dbe3" (UID: "677bbe2f-39e2-46e5-ad32-4234b984dbe3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.314218 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-tzv94"] Nov 23 09:17:47 crc kubenswrapper[5028]: E1123 09:17:47.315085 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677bbe2f-39e2-46e5-ad32-4234b984dbe3" containerName="run-os-openstack-openstack-cell1" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.315113 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="677bbe2f-39e2-46e5-ad32-4234b984dbe3" containerName="run-os-openstack-openstack-cell1" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.315411 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="677bbe2f-39e2-46e5-ad32-4234b984dbe3" containerName="run-os-openstack-openstack-cell1" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.316453 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.328697 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-tzv94"] Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.332108 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.332132 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm59f\" (UniqueName: \"kubernetes.io/projected/677bbe2f-39e2-46e5-ad32-4234b984dbe3-kube-api-access-pm59f\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.332143 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.332156 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/677bbe2f-39e2-46e5-ad32-4234b984dbe3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.434384 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgzk\" (UniqueName: \"kubernetes.io/projected/cada4892-6d66-4825-8921-ff00960f0b66-kube-api-access-fqgzk\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.434473 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " 
pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.434700 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-inventory\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.434887 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ceph\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.537616 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-inventory\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.537720 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ceph\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.537902 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqgzk\" (UniqueName: \"kubernetes.io/projected/cada4892-6d66-4825-8921-ff00960f0b66-kube-api-access-fqgzk\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.537983 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.542402 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-inventory\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.542430 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ceph\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.542616 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ssh-key\") pod 
\"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.556033 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqgzk\" (UniqueName: \"kubernetes.io/projected/cada4892-6d66-4825-8921-ff00960f0b66-kube-api-access-fqgzk\") pod \"reboot-os-openstack-openstack-cell1-tzv94\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:47 crc kubenswrapper[5028]: I1123 09:17:47.678223 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:17:48 crc kubenswrapper[5028]: I1123 09:17:48.300239 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-tzv94"] Nov 23 09:17:49 crc kubenswrapper[5028]: I1123 09:17:49.249165 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" event={"ID":"cada4892-6d66-4825-8921-ff00960f0b66","Type":"ContainerStarted","Data":"d695828841ebc4aa96cdc2a70dd8cfd729017134a17efc317dbef9401bd8d366"} Nov 23 09:17:49 crc kubenswrapper[5028]: I1123 09:17:49.250129 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" event={"ID":"cada4892-6d66-4825-8921-ff00960f0b66","Type":"ContainerStarted","Data":"4466eb93c85c701f19fb0b4e4bd1671357198e8bce024e1679c37b936b0652ea"} Nov 23 09:17:50 crc kubenswrapper[5028]: I1123 09:17:50.292756 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" podStartSLOduration=2.837932994 podStartE2EDuration="3.29272857s" podCreationTimestamp="2025-11-23 09:17:47 +0000 UTC" firstStartedPulling="2025-11-23 09:17:48.312161873 +0000 UTC m=+8852.009566652" lastFinishedPulling="2025-11-23 09:17:48.766957449 +0000 UTC m=+8852.464362228" observedRunningTime="2025-11-23 09:17:50.278360824 +0000 UTC m=+8853.975765613" watchObservedRunningTime="2025-11-23 09:17:50.29272857 +0000 UTC m=+8853.990133349" Nov 23 09:17:52 crc kubenswrapper[5028]: I1123 09:17:52.281933 5028 generic.go:334] "Generic (PLEG): container finished" podID="2575d93a-136e-457d-8a07-0ff7485b500d" containerID="a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe" exitCode=0 Nov 23 09:17:52 crc kubenswrapper[5028]: I1123 09:17:52.282044 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l78d" event={"ID":"2575d93a-136e-457d-8a07-0ff7485b500d","Type":"ContainerDied","Data":"a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe"} Nov 23 09:17:53 crc kubenswrapper[5028]: I1123 09:17:53.295629 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l78d" event={"ID":"2575d93a-136e-457d-8a07-0ff7485b500d","Type":"ContainerStarted","Data":"0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c"} Nov 23 09:17:53 crc kubenswrapper[5028]: I1123 09:17:53.327618 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8l78d" podStartSLOduration=3.777537496 podStartE2EDuration="11.32759051s" podCreationTimestamp="2025-11-23 09:17:42 +0000 UTC" firstStartedPulling="2025-11-23 09:17:45.186664434 +0000 UTC m=+8848.884069213" 
lastFinishedPulling="2025-11-23 09:17:52.736717418 +0000 UTC m=+8856.434122227" observedRunningTime="2025-11-23 09:17:53.315850419 +0000 UTC m=+8857.013255198" watchObservedRunningTime="2025-11-23 09:17:53.32759051 +0000 UTC m=+8857.024995289" Nov 23 09:18:00 crc kubenswrapper[5028]: I1123 09:18:00.946876 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:18:00 crc kubenswrapper[5028]: I1123 09:18:00.947566 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:18:03 crc kubenswrapper[5028]: I1123 09:18:03.259832 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:18:03 crc kubenswrapper[5028]: I1123 09:18:03.260257 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:18:04 crc kubenswrapper[5028]: I1123 09:18:04.317248 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8l78d" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="registry-server" probeResult="failure" output=< Nov 23 09:18:04 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 09:18:04 crc kubenswrapper[5028]: > Nov 23 09:18:05 crc kubenswrapper[5028]: I1123 09:18:05.433076 5028 generic.go:334] "Generic (PLEG): container finished" podID="cada4892-6d66-4825-8921-ff00960f0b66" containerID="d695828841ebc4aa96cdc2a70dd8cfd729017134a17efc317dbef9401bd8d366" exitCode=0 Nov 23 09:18:05 crc kubenswrapper[5028]: I1123 09:18:05.433118 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" event={"ID":"cada4892-6d66-4825-8921-ff00960f0b66","Type":"ContainerDied","Data":"d695828841ebc4aa96cdc2a70dd8cfd729017134a17efc317dbef9401bd8d366"} Nov 23 09:18:06 crc kubenswrapper[5028]: I1123 09:18:06.987603 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.004415 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ssh-key\") pod \"cada4892-6d66-4825-8921-ff00960f0b66\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.004818 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ceph\") pod \"cada4892-6d66-4825-8921-ff00960f0b66\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.005586 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqgzk\" (UniqueName: \"kubernetes.io/projected/cada4892-6d66-4825-8921-ff00960f0b66-kube-api-access-fqgzk\") pod \"cada4892-6d66-4825-8921-ff00960f0b66\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.007470 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-inventory\") pod \"cada4892-6d66-4825-8921-ff00960f0b66\" (UID: \"cada4892-6d66-4825-8921-ff00960f0b66\") " Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.019488 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ceph" (OuterVolumeSpecName: "ceph") pod "cada4892-6d66-4825-8921-ff00960f0b66" (UID: "cada4892-6d66-4825-8921-ff00960f0b66"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.021110 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cada4892-6d66-4825-8921-ff00960f0b66-kube-api-access-fqgzk" (OuterVolumeSpecName: "kube-api-access-fqgzk") pod "cada4892-6d66-4825-8921-ff00960f0b66" (UID: "cada4892-6d66-4825-8921-ff00960f0b66"). InnerVolumeSpecName "kube-api-access-fqgzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.055935 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-inventory" (OuterVolumeSpecName: "inventory") pod "cada4892-6d66-4825-8921-ff00960f0b66" (UID: "cada4892-6d66-4825-8921-ff00960f0b66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.084161 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cada4892-6d66-4825-8921-ff00960f0b66" (UID: "cada4892-6d66-4825-8921-ff00960f0b66"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.114939 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.114995 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.115012 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqgzk\" (UniqueName: \"kubernetes.io/projected/cada4892-6d66-4825-8921-ff00960f0b66-kube-api-access-fqgzk\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.115028 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cada4892-6d66-4825-8921-ff00960f0b66-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.460090 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" event={"ID":"cada4892-6d66-4825-8921-ff00960f0b66","Type":"ContainerDied","Data":"4466eb93c85c701f19fb0b4e4bd1671357198e8bce024e1679c37b936b0652ea"} Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.460602 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4466eb93c85c701f19fb0b4e4bd1671357198e8bce024e1679c37b936b0652ea" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.460233 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-tzv94" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.599871 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-l7q6n"] Nov 23 09:18:07 crc kubenswrapper[5028]: E1123 09:18:07.600436 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cada4892-6d66-4825-8921-ff00960f0b66" containerName="reboot-os-openstack-openstack-cell1" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.600455 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cada4892-6d66-4825-8921-ff00960f0b66" containerName="reboot-os-openstack-openstack-cell1" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.600662 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cada4892-6d66-4825-8921-ff00960f0b66" containerName="reboot-os-openstack-openstack-cell1" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.601611 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.604002 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.607833 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.638868 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-inventory\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639066 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ssh-key\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639152 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639542 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639610 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ceph\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639676 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvvb\" (UniqueName: \"kubernetes.io/projected/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-kube-api-access-xgvvb\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639738 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " 
pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639795 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639823 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.639861 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.641781 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.641829 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.660277 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-l7q6n"] Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.742641 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.743068 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.743179 
5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ceph\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.743319 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvvb\" (UniqueName: \"kubernetes.io/projected/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-kube-api-access-xgvvb\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.743706 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.743856 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.744029 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.744185 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.744295 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.744409 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.744563 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-inventory\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.744691 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ssh-key\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.749820 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.749918 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.750602 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.750898 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ssh-key\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.752031 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ceph\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.752622 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.752762 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ovn-combined-ca-bundle\") 
pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.753026 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-inventory\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.753059 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.757766 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.759028 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.760899 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvvb\" (UniqueName: \"kubernetes.io/projected/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-kube-api-access-xgvvb\") pod \"install-certs-openstack-openstack-cell1-l7q6n\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:07 crc kubenswrapper[5028]: I1123 09:18:07.979581 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:08 crc kubenswrapper[5028]: I1123 09:18:08.557031 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-l7q6n"] Nov 23 09:18:08 crc kubenswrapper[5028]: I1123 09:18:08.582212 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:18:09 crc kubenswrapper[5028]: I1123 09:18:09.490507 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" event={"ID":"abc38b1a-53f1-46e7-814d-eb2f2a1ee989","Type":"ContainerStarted","Data":"0f2bca27131532290f142f93897683d205db646829abe806854e7e945d6cfd56"} Nov 23 09:18:09 crc kubenswrapper[5028]: I1123 09:18:09.491546 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" event={"ID":"abc38b1a-53f1-46e7-814d-eb2f2a1ee989","Type":"ContainerStarted","Data":"1c20a91d0d1468805bf3121722dfef7e2387fdd48614f2e2368c761a90e26188"} Nov 23 09:18:09 crc kubenswrapper[5028]: I1123 09:18:09.523201 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" podStartSLOduration=2.045800768 podStartE2EDuration="2.523169604s" podCreationTimestamp="2025-11-23 09:18:07 +0000 UTC" firstStartedPulling="2025-11-23 09:18:08.581962388 +0000 UTC m=+8872.279367167" lastFinishedPulling="2025-11-23 09:18:09.059331224 +0000 UTC m=+8872.756736003" observedRunningTime="2025-11-23 09:18:09.517039452 +0000 UTC m=+8873.214444271" watchObservedRunningTime="2025-11-23 09:18:09.523169604 +0000 UTC m=+8873.220574403" Nov 23 09:18:14 crc kubenswrapper[5028]: I1123 09:18:14.321395 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8l78d" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="registry-server" probeResult="failure" output=< Nov 23 09:18:14 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 09:18:14 crc kubenswrapper[5028]: > Nov 23 09:18:23 crc kubenswrapper[5028]: I1123 09:18:23.312527 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:18:23 crc kubenswrapper[5028]: I1123 09:18:23.379585 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:18:23 crc kubenswrapper[5028]: I1123 09:18:23.565086 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l78d"] Nov 23 09:18:24 crc kubenswrapper[5028]: I1123 09:18:24.649240 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8l78d" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="registry-server" containerID="cri-o://0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c" gracePeriod=2 Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.197790 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.328607 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjc9f\" (UniqueName: \"kubernetes.io/projected/2575d93a-136e-457d-8a07-0ff7485b500d-kube-api-access-sjc9f\") pod \"2575d93a-136e-457d-8a07-0ff7485b500d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.328783 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-utilities\") pod \"2575d93a-136e-457d-8a07-0ff7485b500d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.329694 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-utilities" (OuterVolumeSpecName: "utilities") pod "2575d93a-136e-457d-8a07-0ff7485b500d" (UID: "2575d93a-136e-457d-8a07-0ff7485b500d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.329789 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-catalog-content\") pod \"2575d93a-136e-457d-8a07-0ff7485b500d\" (UID: \"2575d93a-136e-457d-8a07-0ff7485b500d\") " Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.330639 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.337289 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2575d93a-136e-457d-8a07-0ff7485b500d-kube-api-access-sjc9f" (OuterVolumeSpecName: "kube-api-access-sjc9f") pod "2575d93a-136e-457d-8a07-0ff7485b500d" (UID: "2575d93a-136e-457d-8a07-0ff7485b500d"). InnerVolumeSpecName "kube-api-access-sjc9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.432740 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjc9f\" (UniqueName: \"kubernetes.io/projected/2575d93a-136e-457d-8a07-0ff7485b500d-kube-api-access-sjc9f\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.443938 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2575d93a-136e-457d-8a07-0ff7485b500d" (UID: "2575d93a-136e-457d-8a07-0ff7485b500d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.535641 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2575d93a-136e-457d-8a07-0ff7485b500d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.666041 5028 generic.go:334] "Generic (PLEG): container finished" podID="2575d93a-136e-457d-8a07-0ff7485b500d" containerID="0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c" exitCode=0 Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.666097 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l78d" event={"ID":"2575d93a-136e-457d-8a07-0ff7485b500d","Type":"ContainerDied","Data":"0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c"} Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.666131 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8l78d" event={"ID":"2575d93a-136e-457d-8a07-0ff7485b500d","Type":"ContainerDied","Data":"d91bcd76e8533543ad4735a08b82455b869eebe0d0aa3b1e030d0c449dfd7c9f"} Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.666151 5028 scope.go:117] "RemoveContainer" containerID="0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.666161 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8l78d" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.704447 5028 scope.go:117] "RemoveContainer" containerID="a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.704770 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8l78d"] Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.715651 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8l78d"] Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.743323 5028 scope.go:117] "RemoveContainer" containerID="97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.785620 5028 scope.go:117] "RemoveContainer" containerID="0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c" Nov 23 09:18:25 crc kubenswrapper[5028]: E1123 09:18:25.786376 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c\": container with ID starting with 0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c not found: ID does not exist" containerID="0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.786454 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c"} err="failed to get container status \"0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c\": rpc error: code = NotFound desc = could not find container \"0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c\": container with ID starting with 0f9a50a31eba05618244c475044d06c1f3e28c9734a717bcd5aa68899c264a4c not found: ID does not exist" Nov 23 09:18:25 crc 
kubenswrapper[5028]: I1123 09:18:25.786496 5028 scope.go:117] "RemoveContainer" containerID="a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe" Nov 23 09:18:25 crc kubenswrapper[5028]: E1123 09:18:25.787780 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe\": container with ID starting with a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe not found: ID does not exist" containerID="a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.787821 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe"} err="failed to get container status \"a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe\": rpc error: code = NotFound desc = could not find container \"a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe\": container with ID starting with a56efce24e85e5817960ad522a7a15d4d959a083b5c60a7e6beec820274a5fbe not found: ID does not exist" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.787852 5028 scope.go:117] "RemoveContainer" containerID="97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78" Nov 23 09:18:25 crc kubenswrapper[5028]: E1123 09:18:25.788351 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78\": container with ID starting with 97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78 not found: ID does not exist" containerID="97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78" Nov 23 09:18:25 crc kubenswrapper[5028]: I1123 09:18:25.788399 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78"} err="failed to get container status \"97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78\": rpc error: code = NotFound desc = could not find container \"97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78\": container with ID starting with 97f43f95233d2f0040eec1b8550dc662c4f8db9ffd04b16c4ac30a6fd4cf1a78 not found: ID does not exist" Nov 23 09:18:27 crc kubenswrapper[5028]: I1123 09:18:27.077713 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" path="/var/lib/kubelet/pods/2575d93a-136e-457d-8a07-0ff7485b500d/volumes" Nov 23 09:18:27 crc kubenswrapper[5028]: I1123 09:18:27.691766 5028 generic.go:334] "Generic (PLEG): container finished" podID="9cb79473-ee84-4aef-b25e-81acc20abf95" containerID="88a833a99f922cf24d028f2275130b4ece6eb02d7868e790ea503be6963b31dd" exitCode=0 Nov 23 09:18:27 crc kubenswrapper[5028]: I1123 09:18:27.691817 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-v4srx" event={"ID":"9cb79473-ee84-4aef-b25e-81acc20abf95","Type":"ContainerDied","Data":"88a833a99f922cf24d028f2275130b4ece6eb02d7868e790ea503be6963b31dd"} Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.215557 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.333418 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltvhg\" (UniqueName: \"kubernetes.io/projected/9cb79473-ee84-4aef-b25e-81acc20abf95-kube-api-access-ltvhg\") pod \"9cb79473-ee84-4aef-b25e-81acc20abf95\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.333660 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ssh-key\") pod \"9cb79473-ee84-4aef-b25e-81acc20abf95\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.333852 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-inventory\") pod \"9cb79473-ee84-4aef-b25e-81acc20abf95\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.333932 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ovn-combined-ca-bundle\") pod \"9cb79473-ee84-4aef-b25e-81acc20abf95\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.334254 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9cb79473-ee84-4aef-b25e-81acc20abf95-ovncontroller-config-0\") pod \"9cb79473-ee84-4aef-b25e-81acc20abf95\" (UID: \"9cb79473-ee84-4aef-b25e-81acc20abf95\") " Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.344047 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb79473-ee84-4aef-b25e-81acc20abf95-kube-api-access-ltvhg" (OuterVolumeSpecName: "kube-api-access-ltvhg") pod "9cb79473-ee84-4aef-b25e-81acc20abf95" (UID: "9cb79473-ee84-4aef-b25e-81acc20abf95"). InnerVolumeSpecName "kube-api-access-ltvhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.344353 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9cb79473-ee84-4aef-b25e-81acc20abf95" (UID: "9cb79473-ee84-4aef-b25e-81acc20abf95"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.374824 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-inventory" (OuterVolumeSpecName: "inventory") pod "9cb79473-ee84-4aef-b25e-81acc20abf95" (UID: "9cb79473-ee84-4aef-b25e-81acc20abf95"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.382258 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cb79473-ee84-4aef-b25e-81acc20abf95-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9cb79473-ee84-4aef-b25e-81acc20abf95" (UID: "9cb79473-ee84-4aef-b25e-81acc20abf95"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.383677 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9cb79473-ee84-4aef-b25e-81acc20abf95" (UID: "9cb79473-ee84-4aef-b25e-81acc20abf95"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.437538 5028 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9cb79473-ee84-4aef-b25e-81acc20abf95-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.438099 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltvhg\" (UniqueName: \"kubernetes.io/projected/9cb79473-ee84-4aef-b25e-81acc20abf95-kube-api-access-ltvhg\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.438117 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.438131 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.438146 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cb79473-ee84-4aef-b25e-81acc20abf95-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.741144 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-networker-v4srx" event={"ID":"9cb79473-ee84-4aef-b25e-81acc20abf95","Type":"ContainerDied","Data":"24ab438915057fd6b3e5afa952641d275808b45c329d4af89abe563f4d5b039f"} Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.741214 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ab438915057fd6b3e5afa952641d275808b45c329d4af89abe563f4d5b039f" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.741428 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-networker-v4srx" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.826391 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-tv8lb"] Nov 23 09:18:29 crc kubenswrapper[5028]: E1123 09:18:29.827002 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="registry-server" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.827023 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="registry-server" Nov 23 09:18:29 crc kubenswrapper[5028]: E1123 09:18:29.827046 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="extract-content" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.827055 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="extract-content" Nov 23 09:18:29 crc kubenswrapper[5028]: E1123 09:18:29.827087 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb79473-ee84-4aef-b25e-81acc20abf95" containerName="ovn-openstack-openstack-networker" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.827094 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb79473-ee84-4aef-b25e-81acc20abf95" containerName="ovn-openstack-openstack-networker" Nov 23 09:18:29 crc kubenswrapper[5028]: E1123 09:18:29.827113 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="extract-utilities" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.827120 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="extract-utilities" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.827405 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb79473-ee84-4aef-b25e-81acc20abf95" containerName="ovn-openstack-openstack-networker" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.827438 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2575d93a-136e-457d-8a07-0ff7485b500d" containerName="registry-server" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.828466 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.830375 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-networker-dockercfg-pnp6r" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.833359 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.834071 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.834606 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-networker" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.848139 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-tv8lb"] Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.951561 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-inventory\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.951641 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.951672 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.951761 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.965852 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:29 crc kubenswrapper[5028]: I1123 09:18:29.966067 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6xf55\" (UniqueName: \"kubernetes.io/projected/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-kube-api-access-6xf55\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.069250 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.069891 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.070237 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.070449 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xf55\" (UniqueName: \"kubernetes.io/projected/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-kube-api-access-6xf55\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.070759 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-inventory\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.070925 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.076159 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc 
kubenswrapper[5028]: I1123 09:18:30.077695 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-inventory\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.079564 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.081126 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-ssh-key\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.103087 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.116782 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xf55\" (UniqueName: \"kubernetes.io/projected/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-kube-api-access-6xf55\") pod \"neutron-metadata-openstack-openstack-networker-tv8lb\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:30 crc kubenswrapper[5028]: I1123 09:18:30.166815 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:30.760054 5028 generic.go:334] "Generic (PLEG): container finished" podID="abc38b1a-53f1-46e7-814d-eb2f2a1ee989" containerID="0f2bca27131532290f142f93897683d205db646829abe806854e7e945d6cfd56" exitCode=0 Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:30.760859 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" event={"ID":"abc38b1a-53f1-46e7-814d-eb2f2a1ee989","Type":"ContainerDied","Data":"0f2bca27131532290f142f93897683d205db646829abe806854e7e945d6cfd56"} Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:30.947615 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:30.947710 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:30.947778 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:30.948992 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:30.949070 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" gracePeriod=600 Nov 23 09:18:31 crc kubenswrapper[5028]: E1123 09:18:31.083321 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:31.255097 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-networker-tv8lb"] Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:31.777121 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" event={"ID":"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12","Type":"ContainerStarted","Data":"a5707c6171918708c8a35c621f48c25dd6debb8971b08a7f37f8d887f7cb717f"} Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:31.780672 5028 
generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" exitCode=0 Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:31.780748 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686"} Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:31.780797 5028 scope.go:117] "RemoveContainer" containerID="7fc33854a4e7d73aa92d206882e4ff95709d2d29698b9f2a99337a7d861512b5" Nov 23 09:18:31 crc kubenswrapper[5028]: I1123 09:18:31.782124 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:18:31 crc kubenswrapper[5028]: E1123 09:18:31.782672 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.401019 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.578873 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgvvb\" (UniqueName: \"kubernetes.io/projected/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-kube-api-access-xgvvb\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579363 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ssh-key\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579409 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-inventory\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579427 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ceph\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579533 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ovn-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579616 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-bootstrap-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579666 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-nova-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579808 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-libvirt-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579852 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-metadata-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.579935 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-telemetry-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.580011 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-sriov-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.580038 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-dhcp-combined-ca-bundle\") pod \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\" (UID: \"abc38b1a-53f1-46e7-814d-eb2f2a1ee989\") " Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.585303 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.585420 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-kube-api-access-xgvvb" (OuterVolumeSpecName: "kube-api-access-xgvvb") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "kube-api-access-xgvvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.586346 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.586408 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.586531 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.586564 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ceph" (OuterVolumeSpecName: "ceph") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.586646 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.588256 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.588774 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.596911 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.612901 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.614221 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-inventory" (OuterVolumeSpecName: "inventory") pod "abc38b1a-53f1-46e7-814d-eb2f2a1ee989" (UID: "abc38b1a-53f1-46e7-814d-eb2f2a1ee989"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.682902 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.682968 5028 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.682982 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.682992 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683006 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgvvb\" (UniqueName: \"kubernetes.io/projected/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-kube-api-access-xgvvb\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683017 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683030 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683041 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683051 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683061 5028 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683070 5028 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.683078 5028 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc38b1a-53f1-46e7-814d-eb2f2a1ee989-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.794839 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" event={"ID":"abc38b1a-53f1-46e7-814d-eb2f2a1ee989","Type":"ContainerDied","Data":"1c20a91d0d1468805bf3121722dfef7e2387fdd48614f2e2368c761a90e26188"} Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.795342 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c20a91d0d1468805bf3121722dfef7e2387fdd48614f2e2368c761a90e26188" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.794864 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-l7q6n" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.798272 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" event={"ID":"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12","Type":"ContainerStarted","Data":"e950e7e2eb818a662dda9be8dcadd7782783f6de4f2f809fbf78392079ced74b"} Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.855456 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" podStartSLOduration=3.208619453 podStartE2EDuration="3.855423043s" podCreationTimestamp="2025-11-23 09:18:29 +0000 UTC" firstStartedPulling="2025-11-23 09:18:31.267331705 +0000 UTC m=+8894.964736484" lastFinishedPulling="2025-11-23 09:18:31.914135285 +0000 UTC m=+8895.611540074" observedRunningTime="2025-11-23 09:18:32.832937765 +0000 UTC m=+8896.530342564" watchObservedRunningTime="2025-11-23 09:18:32.855423043 +0000 UTC m=+8896.552827822" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.905115 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-2pkl7"] Nov 23 09:18:32 crc kubenswrapper[5028]: E1123 09:18:32.905718 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc38b1a-53f1-46e7-814d-eb2f2a1ee989" containerName="install-certs-openstack-openstack-cell1" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.905739 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc38b1a-53f1-46e7-814d-eb2f2a1ee989" containerName="install-certs-openstack-openstack-cell1" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.905986 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc38b1a-53f1-46e7-814d-eb2f2a1ee989" containerName="install-certs-openstack-openstack-cell1" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.906841 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.916862 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.916933 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:18:32 crc kubenswrapper[5028]: I1123 09:18:32.918169 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-2pkl7"] Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.107748 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.108032 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-inventory\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.108221 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ld9c\" (UniqueName: \"kubernetes.io/projected/72ef3240-29e1-4d7a-adae-a9c4916b6b72-kube-api-access-2ld9c\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.108265 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ceph\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.209253 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ld9c\" (UniqueName: \"kubernetes.io/projected/72ef3240-29e1-4d7a-adae-a9c4916b6b72-kube-api-access-2ld9c\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.209308 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ceph\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.209401 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " 
pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.209464 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-inventory\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.214498 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ceph\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.217798 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ssh-key\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.219635 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-inventory\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.233699 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ld9c\" (UniqueName: \"kubernetes.io/projected/72ef3240-29e1-4d7a-adae-a9c4916b6b72-kube-api-access-2ld9c\") pod \"ceph-client-openstack-openstack-cell1-2pkl7\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:33 crc kubenswrapper[5028]: I1123 09:18:33.532563 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:34 crc kubenswrapper[5028]: I1123 09:18:34.211531 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-openstack-openstack-cell1-2pkl7"] Nov 23 09:18:34 crc kubenswrapper[5028]: I1123 09:18:34.826755 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" event={"ID":"72ef3240-29e1-4d7a-adae-a9c4916b6b72","Type":"ContainerStarted","Data":"4ec5bd844b73a32f8ec992e8ef0e52c88e9cd83427bb0ea19d9ba48e22ce253e"} Nov 23 09:18:35 crc kubenswrapper[5028]: I1123 09:18:35.840003 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" event={"ID":"72ef3240-29e1-4d7a-adae-a9c4916b6b72","Type":"ContainerStarted","Data":"be32d11e70a1bacaa1122e115ee8aabefc875b36b40727a870cee5901a7c7896"} Nov 23 09:18:35 crc kubenswrapper[5028]: I1123 09:18:35.873122 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" podStartSLOduration=3.458277462 podStartE2EDuration="3.873098836s" podCreationTimestamp="2025-11-23 09:18:32 +0000 UTC" firstStartedPulling="2025-11-23 09:18:34.223716747 +0000 UTC m=+8897.921121526" lastFinishedPulling="2025-11-23 09:18:34.638538121 +0000 UTC m=+8898.335942900" observedRunningTime="2025-11-23 09:18:35.856251218 +0000 UTC m=+8899.553655997" watchObservedRunningTime="2025-11-23 09:18:35.873098836 +0000 UTC m=+8899.570503615" Nov 23 09:18:42 crc kubenswrapper[5028]: I1123 09:18:42.086586 5028 generic.go:334] "Generic (PLEG): container finished" podID="72ef3240-29e1-4d7a-adae-a9c4916b6b72" containerID="be32d11e70a1bacaa1122e115ee8aabefc875b36b40727a870cee5901a7c7896" exitCode=0 Nov 23 09:18:42 crc kubenswrapper[5028]: I1123 09:18:42.086677 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" event={"ID":"72ef3240-29e1-4d7a-adae-a9c4916b6b72","Type":"ContainerDied","Data":"be32d11e70a1bacaa1122e115ee8aabefc875b36b40727a870cee5901a7c7896"} Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.053846 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:18:43 crc kubenswrapper[5028]: E1123 09:18:43.054536 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.576099 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.666273 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ld9c\" (UniqueName: \"kubernetes.io/projected/72ef3240-29e1-4d7a-adae-a9c4916b6b72-kube-api-access-2ld9c\") pod \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.666875 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-inventory\") pod \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.666923 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ceph\") pod \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.667048 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ssh-key\") pod \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\" (UID: \"72ef3240-29e1-4d7a-adae-a9c4916b6b72\") " Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.673075 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72ef3240-29e1-4d7a-adae-a9c4916b6b72-kube-api-access-2ld9c" (OuterVolumeSpecName: "kube-api-access-2ld9c") pod "72ef3240-29e1-4d7a-adae-a9c4916b6b72" (UID: "72ef3240-29e1-4d7a-adae-a9c4916b6b72"). InnerVolumeSpecName "kube-api-access-2ld9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.673405 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ceph" (OuterVolumeSpecName: "ceph") pod "72ef3240-29e1-4d7a-adae-a9c4916b6b72" (UID: "72ef3240-29e1-4d7a-adae-a9c4916b6b72"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.697372 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "72ef3240-29e1-4d7a-adae-a9c4916b6b72" (UID: "72ef3240-29e1-4d7a-adae-a9c4916b6b72"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.703184 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-inventory" (OuterVolumeSpecName: "inventory") pod "72ef3240-29e1-4d7a-adae-a9c4916b6b72" (UID: "72ef3240-29e1-4d7a-adae-a9c4916b6b72"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.769808 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.770026 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ld9c\" (UniqueName: \"kubernetes.io/projected/72ef3240-29e1-4d7a-adae-a9c4916b6b72-kube-api-access-2ld9c\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.770103 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:43 crc kubenswrapper[5028]: I1123 09:18:43.770161 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/72ef3240-29e1-4d7a-adae-a9c4916b6b72-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.113183 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" event={"ID":"72ef3240-29e1-4d7a-adae-a9c4916b6b72","Type":"ContainerDied","Data":"4ec5bd844b73a32f8ec992e8ef0e52c88e9cd83427bb0ea19d9ba48e22ce253e"} Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.113478 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec5bd844b73a32f8ec992e8ef0e52c88e9cd83427bb0ea19d9ba48e22ce253e" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.113283 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-openstack-openstack-cell1-2pkl7" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.189991 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-cd7kr"] Nov 23 09:18:44 crc kubenswrapper[5028]: E1123 09:18:44.192471 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72ef3240-29e1-4d7a-adae-a9c4916b6b72" containerName="ceph-client-openstack-openstack-cell1" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.192570 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="72ef3240-29e1-4d7a-adae-a9c4916b6b72" containerName="ceph-client-openstack-openstack-cell1" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.193001 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="72ef3240-29e1-4d7a-adae-a9c4916b6b72" containerName="ceph-client-openstack-openstack-cell1" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.194003 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.201980 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.202361 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.202555 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.214559 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-cd7kr"] Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.287866 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ceph\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.288049 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.290075 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfh7n\" (UniqueName: \"kubernetes.io/projected/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-kube-api-access-jfh7n\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.290146 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ssh-key\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.290214 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.290336 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-inventory\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.393297 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ceph\") pod 
\"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.393427 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.393534 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfh7n\" (UniqueName: \"kubernetes.io/projected/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-kube-api-access-jfh7n\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.393562 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ssh-key\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.393592 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.393640 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-inventory\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.394523 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.399657 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.402604 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ssh-key\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.402934 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ceph\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.416998 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-inventory\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.419265 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfh7n\" (UniqueName: \"kubernetes.io/projected/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-kube-api-access-jfh7n\") pod \"ovn-openstack-openstack-cell1-cd7kr\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:44 crc kubenswrapper[5028]: I1123 09:18:44.575636 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:18:45 crc kubenswrapper[5028]: I1123 09:18:45.115920 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-cd7kr"] Nov 23 09:18:46 crc kubenswrapper[5028]: I1123 09:18:46.142158 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" event={"ID":"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209","Type":"ContainerStarted","Data":"7635446a6da41329322b3a02ceb7e7ba9b4790e9129d1cdbd069206633f1eff5"} Nov 23 09:18:46 crc kubenswrapper[5028]: I1123 09:18:46.142737 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" event={"ID":"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209","Type":"ContainerStarted","Data":"762a68a1043fcad09fc3e1af2bf4a1931417917fb803b3a559243fc545182609"} Nov 23 09:18:46 crc kubenswrapper[5028]: I1123 09:18:46.167893 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" podStartSLOduration=1.680734962 podStartE2EDuration="2.167869837s" podCreationTimestamp="2025-11-23 09:18:44 +0000 UTC" firstStartedPulling="2025-11-23 09:18:45.122975119 +0000 UTC m=+8908.820379898" lastFinishedPulling="2025-11-23 09:18:45.610109994 +0000 UTC m=+8909.307514773" observedRunningTime="2025-11-23 09:18:46.161865706 +0000 UTC m=+8909.859270535" watchObservedRunningTime="2025-11-23 09:18:46.167869837 +0000 UTC m=+8909.865274616" Nov 23 09:18:56 crc kubenswrapper[5028]: I1123 09:18:56.053439 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:18:56 crc kubenswrapper[5028]: E1123 09:18:56.054247 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:19:07 crc kubenswrapper[5028]: I1123 09:19:07.061919 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:19:07 crc kubenswrapper[5028]: E1123 
09:19:07.062995 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:19:18 crc kubenswrapper[5028]: I1123 09:19:18.054283 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:19:18 crc kubenswrapper[5028]: E1123 09:19:18.055315 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:19:32 crc kubenswrapper[5028]: I1123 09:19:32.054229 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:19:32 crc kubenswrapper[5028]: E1123 09:19:32.055506 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:19:36 crc kubenswrapper[5028]: I1123 09:19:36.742378 5028 generic.go:334] "Generic (PLEG): container finished" podID="1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" containerID="e950e7e2eb818a662dda9be8dcadd7782783f6de4f2f809fbf78392079ced74b" exitCode=0 Nov 23 09:19:36 crc kubenswrapper[5028]: I1123 09:19:36.742455 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" event={"ID":"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12","Type":"ContainerDied","Data":"e950e7e2eb818a662dda9be8dcadd7782783f6de4f2f809fbf78392079ced74b"} Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.240214 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.409165 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xf55\" (UniqueName: \"kubernetes.io/projected/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-kube-api-access-6xf55\") pod \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.409274 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-nova-metadata-neutron-config-0\") pod \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.409327 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-ssh-key\") pod \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.409564 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-ovn-metadata-agent-neutron-config-0\") pod \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.410056 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-inventory\") pod \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.410141 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-metadata-combined-ca-bundle\") pod \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\" (UID: \"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12\") " Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.415306 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-kube-api-access-6xf55" (OuterVolumeSpecName: "kube-api-access-6xf55") pod "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" (UID: "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12"). InnerVolumeSpecName "kube-api-access-6xf55". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.423759 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" (UID: "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.444138 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-inventory" (OuterVolumeSpecName: "inventory") pod "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" (UID: "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.447085 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" (UID: "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.449477 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" (UID: "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.452514 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" (UID: "1b88c41d-a1f0-4e15-9f88-4d7cb4602f12"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.512302 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.512342 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.512353 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.512364 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xf55\" (UniqueName: \"kubernetes.io/projected/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-kube-api-access-6xf55\") on node \"crc\" DevicePath \"\"" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.512374 5028 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.512384 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1b88c41d-a1f0-4e15-9f88-4d7cb4602f12-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.769245 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" event={"ID":"1b88c41d-a1f0-4e15-9f88-4d7cb4602f12","Type":"ContainerDied","Data":"a5707c6171918708c8a35c621f48c25dd6debb8971b08a7f37f8d887f7cb717f"} Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.769546 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5707c6171918708c8a35c621f48c25dd6debb8971b08a7f37f8d887f7cb717f" Nov 23 09:19:38 crc kubenswrapper[5028]: I1123 09:19:38.769349 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-networker-tv8lb" Nov 23 09:19:47 crc kubenswrapper[5028]: I1123 09:19:47.060099 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:19:47 crc kubenswrapper[5028]: E1123 09:19:47.061008 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:20:00 crc kubenswrapper[5028]: I1123 09:20:00.053300 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:20:00 crc kubenswrapper[5028]: E1123 09:20:00.054512 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:20:08 crc kubenswrapper[5028]: I1123 09:20:08.101982 5028 generic.go:334] "Generic (PLEG): container finished" podID="6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" containerID="7635446a6da41329322b3a02ceb7e7ba9b4790e9129d1cdbd069206633f1eff5" exitCode=0 Nov 23 09:20:08 crc kubenswrapper[5028]: I1123 09:20:08.102033 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" event={"ID":"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209","Type":"ContainerDied","Data":"7635446a6da41329322b3a02ceb7e7ba9b4790e9129d1cdbd069206633f1eff5"} Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.604117 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.706291 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ceph\") pod \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.706342 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-inventory\") pod \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.706409 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovn-combined-ca-bundle\") pod \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.706455 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovncontroller-config-0\") pod \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.706573 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ssh-key\") pod \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.706617 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfh7n\" (UniqueName: \"kubernetes.io/projected/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-kube-api-access-jfh7n\") pod \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\" (UID: \"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209\") " Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.714485 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" (UID: "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.714531 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-kube-api-access-jfh7n" (OuterVolumeSpecName: "kube-api-access-jfh7n") pod "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" (UID: "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209"). InnerVolumeSpecName "kube-api-access-jfh7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.715427 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ceph" (OuterVolumeSpecName: "ceph") pod "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" (UID: "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.735677 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" (UID: "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.748526 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" (UID: "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.749500 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-inventory" (OuterVolumeSpecName: "inventory") pod "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" (UID: "6e17e2f1-6fdf-4c0b-9634-d6152a4f3209"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.809757 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.809796 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfh7n\" (UniqueName: \"kubernetes.io/projected/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-kube-api-access-jfh7n\") on node \"crc\" DevicePath \"\"" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.809809 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.809819 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.809830 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:20:09 crc kubenswrapper[5028]: I1123 09:20:09.809838 5028 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6e17e2f1-6fdf-4c0b-9634-d6152a4f3209-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.126578 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" event={"ID":"6e17e2f1-6fdf-4c0b-9634-d6152a4f3209","Type":"ContainerDied","Data":"762a68a1043fcad09fc3e1af2bf4a1931417917fb803b3a559243fc545182609"} Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.126647 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="762a68a1043fcad09fc3e1af2bf4a1931417917fb803b3a559243fc545182609" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.126661 
5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-cd7kr" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.222783 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-sdbhj"] Nov 23 09:20:10 crc kubenswrapper[5028]: E1123 09:20:10.223392 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" containerName="ovn-openstack-openstack-cell1" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.223414 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" containerName="ovn-openstack-openstack-cell1" Nov 23 09:20:10 crc kubenswrapper[5028]: E1123 09:20:10.223437 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" containerName="neutron-metadata-openstack-openstack-networker" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.223446 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" containerName="neutron-metadata-openstack-openstack-networker" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.223698 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b88c41d-a1f0-4e15-9f88-4d7cb4602f12" containerName="neutron-metadata-openstack-openstack-networker" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.223714 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e17e2f1-6fdf-4c0b-9634-d6152a4f3209" containerName="ovn-openstack-openstack-cell1" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.224654 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.232660 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.232932 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.233087 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.234554 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.234566 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.235410 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.236550 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-sdbhj"] Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.324021 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6p2\" (UniqueName: \"kubernetes.io/projected/30480d6a-dd5f-4f67-9557-a45343d87a65-kube-api-access-jf6p2\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" 
Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.324605 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.324944 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.325137 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.325257 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.325360 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.325438 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.428269 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.428336 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.428401 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6p2\" (UniqueName: \"kubernetes.io/projected/30480d6a-dd5f-4f67-9557-a45343d87a65-kube-api-access-jf6p2\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.428422 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.428533 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.428580 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.428614 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.654063 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.655024 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.655487 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " 
pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.661722 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ceph\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.661779 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6p2\" (UniqueName: \"kubernetes.io/projected/30480d6a-dd5f-4f67-9557-a45343d87a65-kube-api-access-jf6p2\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.661970 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.663843 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-sdbhj\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:10 crc kubenswrapper[5028]: I1123 09:20:10.847672 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:20:11 crc kubenswrapper[5028]: W1123 09:20:11.224019 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30480d6a_dd5f_4f67_9557_a45343d87a65.slice/crio-f7e372dfbb25cb776f80feb27c9e595cbdca7ac57d4973353b0a94fea2c5dcf0 WatchSource:0}: Error finding container f7e372dfbb25cb776f80feb27c9e595cbdca7ac57d4973353b0a94fea2c5dcf0: Status 404 returned error can't find the container with id f7e372dfbb25cb776f80feb27c9e595cbdca7ac57d4973353b0a94fea2c5dcf0 Nov 23 09:20:11 crc kubenswrapper[5028]: I1123 09:20:11.227191 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-sdbhj"] Nov 23 09:20:12 crc kubenswrapper[5028]: I1123 09:20:12.159221 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" event={"ID":"30480d6a-dd5f-4f67-9557-a45343d87a65","Type":"ContainerStarted","Data":"bc4d728ba36c59b724c939297dce407d28dcb44fee512b82056e525d813bea2a"} Nov 23 09:20:12 crc kubenswrapper[5028]: I1123 09:20:12.159857 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" event={"ID":"30480d6a-dd5f-4f67-9557-a45343d87a65","Type":"ContainerStarted","Data":"f7e372dfbb25cb776f80feb27c9e595cbdca7ac57d4973353b0a94fea2c5dcf0"} Nov 23 09:20:12 crc kubenswrapper[5028]: I1123 09:20:12.188812 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" podStartSLOduration=1.768571136 podStartE2EDuration="2.188790134s" podCreationTimestamp="2025-11-23 09:20:10 +0000 UTC" firstStartedPulling="2025-11-23 09:20:11.226831795 +0000 UTC m=+8994.924236574" lastFinishedPulling="2025-11-23 09:20:11.647050793 +0000 UTC m=+8995.344455572" observedRunningTime="2025-11-23 09:20:12.185611185 +0000 UTC m=+8995.883015964" watchObservedRunningTime="2025-11-23 09:20:12.188790134 +0000 UTC m=+8995.886194913" Nov 23 09:20:13 crc kubenswrapper[5028]: I1123 09:20:13.053293 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:20:13 crc kubenswrapper[5028]: E1123 09:20:13.054363 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:20:24 crc kubenswrapper[5028]: I1123 09:20:24.054236 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:20:24 crc kubenswrapper[5028]: E1123 09:20:24.055694 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:20:38 crc kubenswrapper[5028]: I1123 09:20:38.054406 5028 
scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:20:38 crc kubenswrapper[5028]: E1123 09:20:38.055392 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:20:52 crc kubenswrapper[5028]: I1123 09:20:52.053638 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:20:52 crc kubenswrapper[5028]: E1123 09:20:52.054560 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:21:07 crc kubenswrapper[5028]: I1123 09:21:07.062649 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:21:07 crc kubenswrapper[5028]: E1123 09:21:07.064598 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:21:08 crc kubenswrapper[5028]: I1123 09:21:08.850843 5028 generic.go:334] "Generic (PLEG): container finished" podID="30480d6a-dd5f-4f67-9557-a45343d87a65" containerID="bc4d728ba36c59b724c939297dce407d28dcb44fee512b82056e525d813bea2a" exitCode=0 Nov 23 09:21:08 crc kubenswrapper[5028]: I1123 09:21:08.850991 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" event={"ID":"30480d6a-dd5f-4f67-9557-a45343d87a65","Type":"ContainerDied","Data":"bc4d728ba36c59b724c939297dce407d28dcb44fee512b82056e525d813bea2a"} Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.410139 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.585619 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"30480d6a-dd5f-4f67-9557-a45343d87a65\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.585700 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ssh-key\") pod \"30480d6a-dd5f-4f67-9557-a45343d87a65\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.585777 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf6p2\" (UniqueName: \"kubernetes.io/projected/30480d6a-dd5f-4f67-9557-a45343d87a65-kube-api-access-jf6p2\") pod \"30480d6a-dd5f-4f67-9557-a45343d87a65\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.585810 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ceph\") pod \"30480d6a-dd5f-4f67-9557-a45343d87a65\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.585846 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-nova-metadata-neutron-config-0\") pod \"30480d6a-dd5f-4f67-9557-a45343d87a65\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.585887 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-inventory\") pod \"30480d6a-dd5f-4f67-9557-a45343d87a65\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.585920 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-metadata-combined-ca-bundle\") pod \"30480d6a-dd5f-4f67-9557-a45343d87a65\" (UID: \"30480d6a-dd5f-4f67-9557-a45343d87a65\") " Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.593114 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ceph" (OuterVolumeSpecName: "ceph") pod "30480d6a-dd5f-4f67-9557-a45343d87a65" (UID: "30480d6a-dd5f-4f67-9557-a45343d87a65"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.593642 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30480d6a-dd5f-4f67-9557-a45343d87a65-kube-api-access-jf6p2" (OuterVolumeSpecName: "kube-api-access-jf6p2") pod "30480d6a-dd5f-4f67-9557-a45343d87a65" (UID: "30480d6a-dd5f-4f67-9557-a45343d87a65"). InnerVolumeSpecName "kube-api-access-jf6p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.595064 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "30480d6a-dd5f-4f67-9557-a45343d87a65" (UID: "30480d6a-dd5f-4f67-9557-a45343d87a65"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.624043 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "30480d6a-dd5f-4f67-9557-a45343d87a65" (UID: "30480d6a-dd5f-4f67-9557-a45343d87a65"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.624056 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-inventory" (OuterVolumeSpecName: "inventory") pod "30480d6a-dd5f-4f67-9557-a45343d87a65" (UID: "30480d6a-dd5f-4f67-9557-a45343d87a65"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.625927 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "30480d6a-dd5f-4f67-9557-a45343d87a65" (UID: "30480d6a-dd5f-4f67-9557-a45343d87a65"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.626200 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "30480d6a-dd5f-4f67-9557-a45343d87a65" (UID: "30480d6a-dd5f-4f67-9557-a45343d87a65"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.688751 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.688788 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf6p2\" (UniqueName: \"kubernetes.io/projected/30480d6a-dd5f-4f67-9557-a45343d87a65-kube-api-access-jf6p2\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.688803 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.688812 5028 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.688821 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.688834 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.688845 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/30480d6a-dd5f-4f67-9557-a45343d87a65-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.878401 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" event={"ID":"30480d6a-dd5f-4f67-9557-a45343d87a65","Type":"ContainerDied","Data":"f7e372dfbb25cb776f80feb27c9e595cbdca7ac57d4973353b0a94fea2c5dcf0"} Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.878730 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e372dfbb25cb776f80feb27c9e595cbdca7ac57d4973353b0a94fea2c5dcf0" Nov 23 09:21:10 crc kubenswrapper[5028]: I1123 09:21:10.878561 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-sdbhj" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.020285 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-f6p9x"] Nov 23 09:21:11 crc kubenswrapper[5028]: E1123 09:21:11.020878 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30480d6a-dd5f-4f67-9557-a45343d87a65" containerName="neutron-metadata-openstack-openstack-cell1" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.020898 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="30480d6a-dd5f-4f67-9557-a45343d87a65" containerName="neutron-metadata-openstack-openstack-cell1" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.021181 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="30480d6a-dd5f-4f67-9557-a45343d87a65" containerName="neutron-metadata-openstack-openstack-cell1" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.024100 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.027220 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.027364 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.027319 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.028920 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.029375 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.033671 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-f6p9x"] Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.098485 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.098552 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwrxg\" (UniqueName: \"kubernetes.io/projected/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-kube-api-access-zwrxg\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.100026 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ssh-key\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.100226 5028 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-inventory\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.100419 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ceph\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.101002 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.203004 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwrxg\" (UniqueName: \"kubernetes.io/projected/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-kube-api-access-zwrxg\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.203080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ssh-key\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.203113 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-inventory\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.203152 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ceph\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.203279 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.203329 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: 
\"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.208337 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ssh-key\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.208402 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.208728 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ceph\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.212621 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-inventory\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.214849 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.222204 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwrxg\" (UniqueName: \"kubernetes.io/projected/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-kube-api-access-zwrxg\") pod \"libvirt-openstack-openstack-cell1-f6p9x\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.381247 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:21:11 crc kubenswrapper[5028]: I1123 09:21:11.970183 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-f6p9x"] Nov 23 09:21:12 crc kubenswrapper[5028]: I1123 09:21:12.901658 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" event={"ID":"da2b8220-d1d8-40ae-a96a-54e3a1c13c10","Type":"ContainerStarted","Data":"37a2c6bd4ccda14f2e138433edd91cbbc8c8695df4c05a4b1cd3a604abb9c0ed"} Nov 23 09:21:12 crc kubenswrapper[5028]: I1123 09:21:12.902222 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" event={"ID":"da2b8220-d1d8-40ae-a96a-54e3a1c13c10","Type":"ContainerStarted","Data":"ed2a51a6a7ef717761b3869889578b2471cc8e7474ef3a05f736f27561b9312f"} Nov 23 09:21:12 crc kubenswrapper[5028]: I1123 09:21:12.922031 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" podStartSLOduration=2.455788627 podStartE2EDuration="2.922009387s" podCreationTimestamp="2025-11-23 09:21:10 +0000 UTC" firstStartedPulling="2025-11-23 09:21:11.982826139 +0000 UTC m=+9055.680230918" lastFinishedPulling="2025-11-23 09:21:12.449046899 +0000 UTC m=+9056.146451678" observedRunningTime="2025-11-23 09:21:12.921428423 +0000 UTC m=+9056.618833202" watchObservedRunningTime="2025-11-23 09:21:12.922009387 +0000 UTC m=+9056.619414166" Nov 23 09:21:18 crc kubenswrapper[5028]: I1123 09:21:18.053560 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:21:18 crc kubenswrapper[5028]: E1123 09:21:18.054812 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:21:33 crc kubenswrapper[5028]: I1123 09:21:33.075066 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:21:33 crc kubenswrapper[5028]: E1123 09:21:33.077861 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:21:45 crc kubenswrapper[5028]: I1123 09:21:45.053922 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:21:45 crc kubenswrapper[5028]: E1123 09:21:45.054826 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:21:56 crc kubenswrapper[5028]: I1123 09:21:56.053421 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:21:56 crc kubenswrapper[5028]: E1123 09:21:56.054072 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.232773 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dkggm"] Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.235783 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.281869 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkggm"] Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.391310 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-catalog-content\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.391918 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-utilities\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.392108 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5wtg\" (UniqueName: \"kubernetes.io/projected/7eb38402-324e-43c4-904f-06dfddc7d731-kube-api-access-p5wtg\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.493757 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-utilities\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.494032 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5wtg\" (UniqueName: \"kubernetes.io/projected/7eb38402-324e-43c4-904f-06dfddc7d731-kube-api-access-p5wtg\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.494080 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-catalog-content\") 
pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.494301 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-utilities\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.494641 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-catalog-content\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.555800 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5wtg\" (UniqueName: \"kubernetes.io/projected/7eb38402-324e-43c4-904f-06dfddc7d731-kube-api-access-p5wtg\") pod \"certified-operators-dkggm\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:58 crc kubenswrapper[5028]: I1123 09:21:58.572293 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:21:59 crc kubenswrapper[5028]: I1123 09:21:59.207731 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dkggm"] Nov 23 09:21:59 crc kubenswrapper[5028]: I1123 09:21:59.456867 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerStarted","Data":"581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c"} Nov 23 09:21:59 crc kubenswrapper[5028]: I1123 09:21:59.456915 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerStarted","Data":"4be22b56a4fdb2e6cb2ac8be8ade841097308d529384d2c2381ec6355889ca95"} Nov 23 09:22:00 crc kubenswrapper[5028]: I1123 09:22:00.469447 5028 generic.go:334] "Generic (PLEG): container finished" podID="7eb38402-324e-43c4-904f-06dfddc7d731" containerID="581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c" exitCode=0 Nov 23 09:22:00 crc kubenswrapper[5028]: I1123 09:22:00.469528 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerDied","Data":"581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c"} Nov 23 09:22:01 crc kubenswrapper[5028]: I1123 09:22:01.523072 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerStarted","Data":"24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224"} Nov 23 09:22:02 crc kubenswrapper[5028]: I1123 09:22:02.534277 5028 generic.go:334] "Generic (PLEG): container finished" podID="7eb38402-324e-43c4-904f-06dfddc7d731" containerID="24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224" exitCode=0 Nov 23 09:22:02 crc kubenswrapper[5028]: I1123 09:22:02.534407 5028 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerDied","Data":"24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224"} Nov 23 09:22:03 crc kubenswrapper[5028]: I1123 09:22:03.548678 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerStarted","Data":"f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f"} Nov 23 09:22:03 crc kubenswrapper[5028]: I1123 09:22:03.584563 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dkggm" podStartSLOduration=2.089756193 podStartE2EDuration="5.584540909s" podCreationTimestamp="2025-11-23 09:21:58 +0000 UTC" firstStartedPulling="2025-11-23 09:21:59.459250158 +0000 UTC m=+9103.156654927" lastFinishedPulling="2025-11-23 09:22:02.954034864 +0000 UTC m=+9106.651439643" observedRunningTime="2025-11-23 09:22:03.569247926 +0000 UTC m=+9107.266652745" watchObservedRunningTime="2025-11-23 09:22:03.584540909 +0000 UTC m=+9107.281945688" Nov 23 09:22:07 crc kubenswrapper[5028]: I1123 09:22:07.061510 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:22:07 crc kubenswrapper[5028]: E1123 09:22:07.062299 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:22:08 crc kubenswrapper[5028]: I1123 09:22:08.572934 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:22:08 crc kubenswrapper[5028]: I1123 09:22:08.573019 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:22:09 crc kubenswrapper[5028]: I1123 09:22:09.000456 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:22:09 crc kubenswrapper[5028]: I1123 09:22:09.067367 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:22:09 crc kubenswrapper[5028]: I1123 09:22:09.248686 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkggm"] Nov 23 09:22:10 crc kubenswrapper[5028]: I1123 09:22:10.613283 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dkggm" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="registry-server" containerID="cri-o://f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f" gracePeriod=2 Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.624377 5028 util.go:48] "No ready sandbox for pod can be found. 
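The deletion at 09:22:09 kicks off a graceful stop: "Killing container with a grace period ... gracePeriod=2" means the runtime delivers SIGTERM and only escalates to SIGKILL if the container is still alive two seconds later; the entries that follow show registry-server exiting cleanly (exitCode=0) inside that window. A minimal sketch of the term-then-kill contract, illustrative only and not CRI-O's actual implementation:

```python
import signal
import subprocess
import time

def stop_with_grace(proc: subprocess.Popen, grace_s: float = 2.0) -> None:
    """Send SIGTERM, wait up to grace_s for a clean exit, then SIGKILL."""
    proc.send_signal(signal.SIGTERM)      # polite shutdown request
    deadline = time.monotonic() + grace_s
    while time.monotonic() < deadline:
        if proc.poll() is not None:       # exited within the grace period
            return
        time.sleep(0.05)
    proc.kill()                           # hard stop once the grace expires
```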
Need to start a new one" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.629066 5028 generic.go:334] "Generic (PLEG): container finished" podID="7eb38402-324e-43c4-904f-06dfddc7d731" containerID="f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f" exitCode=0 Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.629155 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerDied","Data":"f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f"} Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.629556 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dkggm" event={"ID":"7eb38402-324e-43c4-904f-06dfddc7d731","Type":"ContainerDied","Data":"4be22b56a4fdb2e6cb2ac8be8ade841097308d529384d2c2381ec6355889ca95"} Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.629626 5028 scope.go:117] "RemoveContainer" containerID="f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.660249 5028 scope.go:117] "RemoveContainer" containerID="24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.687606 5028 scope.go:117] "RemoveContainer" containerID="581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.689932 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-catalog-content\") pod \"7eb38402-324e-43c4-904f-06dfddc7d731\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.690140 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5wtg\" (UniqueName: \"kubernetes.io/projected/7eb38402-324e-43c4-904f-06dfddc7d731-kube-api-access-p5wtg\") pod \"7eb38402-324e-43c4-904f-06dfddc7d731\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.690187 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-utilities\") pod \"7eb38402-324e-43c4-904f-06dfddc7d731\" (UID: \"7eb38402-324e-43c4-904f-06dfddc7d731\") " Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.693613 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-utilities" (OuterVolumeSpecName: "utilities") pod "7eb38402-324e-43c4-904f-06dfddc7d731" (UID: "7eb38402-324e-43c4-904f-06dfddc7d731"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.698472 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb38402-324e-43c4-904f-06dfddc7d731-kube-api-access-p5wtg" (OuterVolumeSpecName: "kube-api-access-p5wtg") pod "7eb38402-324e-43c4-904f-06dfddc7d731" (UID: "7eb38402-324e-43c4-904f-06dfddc7d731"). InnerVolumeSpecName "kube-api-access-p5wtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.741370 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eb38402-324e-43c4-904f-06dfddc7d731" (UID: "7eb38402-324e-43c4-904f-06dfddc7d731"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.792919 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5wtg\" (UniqueName: \"kubernetes.io/projected/7eb38402-324e-43c4-904f-06dfddc7d731-kube-api-access-p5wtg\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.792988 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.793004 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb38402-324e-43c4-904f-06dfddc7d731-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.867043 5028 scope.go:117] "RemoveContainer" containerID="f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f" Nov 23 09:22:11 crc kubenswrapper[5028]: E1123 09:22:11.867574 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f\": container with ID starting with f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f not found: ID does not exist" containerID="f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.867620 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f"} err="failed to get container status \"f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f\": rpc error: code = NotFound desc = could not find container \"f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f\": container with ID starting with f655d287d4749bf31df57d64c5547b343ea01326aaef3b59444707efe2f9503f not found: ID does not exist" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.867648 5028 scope.go:117] "RemoveContainer" containerID="24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224" Nov 23 09:22:11 crc kubenswrapper[5028]: E1123 09:22:11.868035 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224\": container with ID starting with 24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224 not found: ID does not exist" containerID="24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.868089 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224"} err="failed to get container status \"24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224\": rpc error: code = 
NotFound desc = could not find container \"24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224\": container with ID starting with 24242a202f91ca7fc952236f175c1eb6495d7c8101985e87f2da6e0c59854224 not found: ID does not exist" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.868120 5028 scope.go:117] "RemoveContainer" containerID="581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c" Nov 23 09:22:11 crc kubenswrapper[5028]: E1123 09:22:11.868534 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c\": container with ID starting with 581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c not found: ID does not exist" containerID="581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c" Nov 23 09:22:11 crc kubenswrapper[5028]: I1123 09:22:11.868579 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c"} err="failed to get container status \"581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c\": rpc error: code = NotFound desc = could not find container \"581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c\": container with ID starting with 581e84bda6bdf1f60daad0b30607cca2d437186ee10942e88a0948dc739a938c not found: ID does not exist" Nov 23 09:22:12 crc kubenswrapper[5028]: I1123 09:22:12.639872 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dkggm" Nov 23 09:22:12 crc kubenswrapper[5028]: I1123 09:22:12.675025 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dkggm"] Nov 23 09:22:12 crc kubenswrapper[5028]: I1123 09:22:12.689509 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dkggm"] Nov 23 09:22:13 crc kubenswrapper[5028]: I1123 09:22:13.065069 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" path="/var/lib/kubelet/pods/7eb38402-324e-43c4-904f-06dfddc7d731/volumes" Nov 23 09:22:19 crc kubenswrapper[5028]: I1123 09:22:19.053582 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:22:19 crc kubenswrapper[5028]: E1123 09:22:19.054379 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:22:31 crc kubenswrapper[5028]: I1123 09:22:31.054347 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:22:31 crc kubenswrapper[5028]: E1123 09:22:31.055498 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
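The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above read as a benign race rather than a real failure: the first cleanup pass at 09:22:11.62 had already deleted the containers, so when the second pass at 09:22:11.86 asks CRI-O about them it gets NotFound, logs it, and moves on; the pod still reaches SyncLoop REMOVE and its volumes directory is cleaned up. A sketch of that NotFound-tolerant deletion pattern, with illustrative names only:

```python
class NotFoundError(Exception):
    """Stand-in for the runtime's NOT_FOUND gRPC status."""

def remove_container(runtime, container_id: str) -> None:
    # Deletion is idempotent: a container that is already gone is treated
    # as successfully removed rather than surfaced as a sync failure.
    try:
        runtime.remove(container_id)
    except NotFoundError:
        print(f"container {container_id} already gone; nothing to do")
```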
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:22:46 crc kubenswrapper[5028]: I1123 09:22:46.054714 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:22:46 crc kubenswrapper[5028]: E1123 09:22:46.056631 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:22:59 crc kubenswrapper[5028]: I1123 09:22:59.054069 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:22:59 crc kubenswrapper[5028]: E1123 09:22:59.056625 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:23:14 crc kubenswrapper[5028]: I1123 09:23:14.053865 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:23:14 crc kubenswrapper[5028]: E1123 09:23:14.055322 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:23:27 crc kubenswrapper[5028]: I1123 09:23:27.069837 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:23:27 crc kubenswrapper[5028]: E1123 09:23:27.071243 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:23:42 crc kubenswrapper[5028]: I1123 09:23:42.058817 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:23:42 crc kubenswrapper[5028]: I1123 09:23:42.734144 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"efcc42724720e9fa7c2aca2012d555e223a989f60cd99c2dbeeedbe4692d339d"} Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.484485 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dzlq"] Nov 23 09:24:41 crc kubenswrapper[5028]: E1123 09:24:41.485558 
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.484485 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dzlq"]
Nov 23 09:24:41 crc kubenswrapper[5028]: E1123 09:24:41.485558 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="extract-utilities"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.485582 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="extract-utilities"
Nov 23 09:24:41 crc kubenswrapper[5028]: E1123 09:24:41.485608 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="registry-server"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.485614 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="registry-server"
Nov 23 09:24:41 crc kubenswrapper[5028]: E1123 09:24:41.485653 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="extract-content"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.485659 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="extract-content"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.485883 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb38402-324e-43c4-904f-06dfddc7d731" containerName="registry-server"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.487852 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.503735 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dzlq"]
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.573325 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-catalog-content\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.573454 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95vrn\" (UniqueName: \"kubernetes.io/projected/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-kube-api-access-95vrn\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.573490 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-utilities\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq"
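Each volume above walks the same pipeline: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637). The volume manager reconciles a desired state ("this pod needs these volumes") against an actual state ("these volumes are mounted"), issuing one operation per difference. A toy version of that loop, with illustrative names only:

```python
# Desired vs. actual state reconciliation, as seen for redhat-marketplace-6dzlq.
desired = {"catalog-content", "kube-api-access-95vrn", "utilities"}
mounted: set = set()

def reconcile(mount) -> None:
    for volume in sorted(desired - mounted):   # every volume not yet mounted
        mount(volume)                          # Verify -> MountVolume -> SetUp
        mounted.add(volume)

reconcile(lambda v: print(f'MountVolume.SetUp succeeded for volume "{v}"'))
```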
\"kubernetes.io/projected/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-kube-api-access-95vrn\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.675669 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-utilities\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.676258 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-utilities\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.676512 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-catalog-content\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.707172 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95vrn\" (UniqueName: \"kubernetes.io/projected/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-kube-api-access-95vrn\") pod \"redhat-marketplace-6dzlq\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") " pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 09:24:41 crc kubenswrapper[5028]: I1123 09:24:41.823728 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 09:24:42 crc kubenswrapper[5028]: I1123 09:24:42.337155 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dzlq"] Nov 23 09:24:42 crc kubenswrapper[5028]: I1123 09:24:42.438785 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dzlq" event={"ID":"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8","Type":"ContainerStarted","Data":"84b6806a869ef933d9b6e48c6963877cf986b99e99a3fb4c7997e39d62a20f65"} Nov 23 09:24:43 crc kubenswrapper[5028]: I1123 09:24:43.455846 5028 generic.go:334] "Generic (PLEG): container finished" podID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerID="2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00" exitCode=0 Nov 23 09:24:43 crc kubenswrapper[5028]: I1123 09:24:43.456050 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dzlq" event={"ID":"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8","Type":"ContainerDied","Data":"2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00"} Nov 23 09:24:43 crc kubenswrapper[5028]: I1123 09:24:43.459285 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:24:44 crc kubenswrapper[5028]: I1123 09:24:44.474025 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dzlq" event={"ID":"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8","Type":"ContainerStarted","Data":"bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4"} Nov 23 09:24:45 crc kubenswrapper[5028]: I1123 09:24:45.488176 5028 generic.go:334] "Generic (PLEG): container finished" podID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerID="bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4" exitCode=0 Nov 23 09:24:45 crc kubenswrapper[5028]: I1123 09:24:45.488265 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dzlq" event={"ID":"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8","Type":"ContainerDied","Data":"bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4"} Nov 23 09:24:46 crc kubenswrapper[5028]: I1123 09:24:46.502724 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dzlq" event={"ID":"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8","Type":"ContainerStarted","Data":"bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb"} Nov 23 09:24:46 crc kubenswrapper[5028]: I1123 09:24:46.528956 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dzlq" podStartSLOduration=3.04544336 podStartE2EDuration="5.528921179s" podCreationTimestamp="2025-11-23 09:24:41 +0000 UTC" firstStartedPulling="2025-11-23 09:24:43.458982047 +0000 UTC m=+9267.156386826" lastFinishedPulling="2025-11-23 09:24:45.942459866 +0000 UTC m=+9269.639864645" observedRunningTime="2025-11-23 09:24:46.523160984 +0000 UTC m=+9270.220565803" watchObservedRunningTime="2025-11-23 09:24:46.528921179 +0000 UTC m=+9270.226325958" Nov 23 09:24:51 crc kubenswrapper[5028]: I1123 09:24:51.824605 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 09:24:51 crc kubenswrapper[5028]: I1123 09:24:51.825467 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dzlq" Nov 23 
Nov 23 09:24:51 crc kubenswrapper[5028]: I1123 09:24:51.824605 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:51 crc kubenswrapper[5028]: I1123 09:24:51.825467 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:51 crc kubenswrapper[5028]: I1123 09:24:51.887524 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:52 crc kubenswrapper[5028]: I1123 09:24:52.616473 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:52 crc kubenswrapper[5028]: I1123 09:24:52.666532 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dzlq"]
Nov 23 09:24:54 crc kubenswrapper[5028]: I1123 09:24:54.588619 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dzlq" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="registry-server" containerID="cri-o://bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb" gracePeriod=2
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.115785 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.207226 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-catalog-content\") pod \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") "
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.207621 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95vrn\" (UniqueName: \"kubernetes.io/projected/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-kube-api-access-95vrn\") pod \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") "
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.208397 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-utilities\") pod \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\" (UID: \"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8\") "
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.210601 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-utilities" (OuterVolumeSpecName: "utilities") pod "f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" (UID: "f04d2eb6-c0b8-4171-bc76-e0b547daa7f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.217182 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-kube-api-access-95vrn" (OuterVolumeSpecName: "kube-api-access-95vrn") pod "f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" (UID: "f04d2eb6-c0b8-4171-bc76-e0b547daa7f8"). InnerVolumeSpecName "kube-api-access-95vrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.230633 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" (UID: "f04d2eb6-c0b8-4171-bc76-e0b547daa7f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.313081 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.313158 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95vrn\" (UniqueName: \"kubernetes.io/projected/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-kube-api-access-95vrn\") on node \"crc\" DevicePath \"\""
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.313186 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.604257 5028 generic.go:334] "Generic (PLEG): container finished" podID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerID="bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb" exitCode=0
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.604324 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dzlq" event={"ID":"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8","Type":"ContainerDied","Data":"bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb"}
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.604353 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dzlq"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.604380 5028 scope.go:117] "RemoveContainer" containerID="bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.604365 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dzlq" event={"ID":"f04d2eb6-c0b8-4171-bc76-e0b547daa7f8","Type":"ContainerDied","Data":"84b6806a869ef933d9b6e48c6963877cf986b99e99a3fb4c7997e39d62a20f65"}
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.633150 5028 scope.go:117] "RemoveContainer" containerID="bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.656898 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dzlq"]
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.670013 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dzlq"]
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.696074 5028 scope.go:117] "RemoveContainer" containerID="2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.731016 5028 scope.go:117] "RemoveContainer" containerID="bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb"
Nov 23 09:24:55 crc kubenswrapper[5028]: E1123 09:24:55.731482 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb\": container with ID starting with bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb not found: ID does not exist" containerID="bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.731515 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb"} err="failed to get container status \"bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb\": rpc error: code = NotFound desc = could not find container \"bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb\": container with ID starting with bfdfad2b939f05fa348cc6aff9b61ce504e54bff81bb5dd0c67bf8d36ed01cdb not found: ID does not exist"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.731541 5028 scope.go:117] "RemoveContainer" containerID="bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4"
Nov 23 09:24:55 crc kubenswrapper[5028]: E1123 09:24:55.731912 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4\": container with ID starting with bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4 not found: ID does not exist" containerID="bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.731984 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4"} err="failed to get container status \"bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4\": rpc error: code = NotFound desc = could not find container \"bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4\": container with ID starting with bd824e5056710934c94bd2726535406076286a5b9bce73872b0f21c16bbcf8a4 not found: ID does not exist"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.732039 5028 scope.go:117] "RemoveContainer" containerID="2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00"
Nov 23 09:24:55 crc kubenswrapper[5028]: E1123 09:24:55.733278 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00\": container with ID starting with 2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00 not found: ID does not exist" containerID="2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00"
Nov 23 09:24:55 crc kubenswrapper[5028]: I1123 09:24:55.733346 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00"} err="failed to get container status \"2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00\": rpc error: code = NotFound desc = could not find container \"2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00\": container with ID starting with 2af7da4532b82bb40b9ba9c70dcf62dca33e88cb6a430d96597b78f330033d00 not found: ID does not exist"
Nov 23 09:24:57 crc kubenswrapper[5028]: I1123 09:24:57.070816 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" path="/var/lib/kubelet/pods/f04d2eb6-c0b8-4171-bc76-e0b547daa7f8/volumes"
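"Cleaned up orphaned pod volumes dir" is the kubelet's housekeeping pass: once the API object is gone (SyncLoop REMOVE) and every volume has been torn down and detached, the leftover /var/lib/kubelet/pods/<uid>/volumes directory can be deleted. A rough sketch of the eligibility check, illustrative rather than kubelet_volumes.go itself:

```python
from pathlib import Path

def orphaned_volume_dirs(pods_root: Path, active_uids: set):
    """Yield volumes dirs belonging to pod UIDs that are no longer active."""
    if not pods_root.is_dir():
        return
    for pod_dir in pods_root.iterdir():
        volumes = pod_dir / "volumes"
        if pod_dir.name not in active_uids and volumes.is_dir():
            yield volumes

for d in orphaned_volume_dirs(Path("/var/lib/kubelet/pods"), active_uids=set()):
    print(f"would clean up {d}")
```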
podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="extract-utilities" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.442047 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="extract-utilities" Nov 23 09:25:27 crc kubenswrapper[5028]: E1123 09:25:27.442088 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="extract-content" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.442097 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="extract-content" Nov 23 09:25:27 crc kubenswrapper[5028]: E1123 09:25:27.442128 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="registry-server" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.442139 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="registry-server" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.442407 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04d2eb6-c0b8-4171-bc76-e0b547daa7f8" containerName="registry-server" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.444253 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.454874 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vg7t8"] Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.582783 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-utilities\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.582854 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8l6n\" (UniqueName: \"kubernetes.io/projected/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-kube-api-access-n8l6n\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.582897 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-catalog-content\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.685325 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8l6n\" (UniqueName: \"kubernetes.io/projected/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-kube-api-access-n8l6n\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.685416 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-catalog-content\") 
pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.685579 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-utilities\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.686064 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-catalog-content\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.686172 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-utilities\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.714835 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8l6n\" (UniqueName: \"kubernetes.io/projected/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-kube-api-access-n8l6n\") pod \"community-operators-vg7t8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:27 crc kubenswrapper[5028]: I1123 09:25:27.780164 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:28 crc kubenswrapper[5028]: I1123 09:25:28.396233 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vg7t8"] Nov 23 09:25:29 crc kubenswrapper[5028]: I1123 09:25:29.027088 5028 generic.go:334] "Generic (PLEG): container finished" podID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerID="26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc" exitCode=0 Nov 23 09:25:29 crc kubenswrapper[5028]: I1123 09:25:29.027165 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vg7t8" event={"ID":"7d46ccdc-7a45-468b-98f9-085e7b40c6f8","Type":"ContainerDied","Data":"26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc"} Nov 23 09:25:29 crc kubenswrapper[5028]: I1123 09:25:29.027395 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vg7t8" event={"ID":"7d46ccdc-7a45-468b-98f9-085e7b40c6f8","Type":"ContainerStarted","Data":"d9671f1bba32cc8cd01019aa3c95145b553081fdce0a57880ee299c202d940f3"} Nov 23 09:25:30 crc kubenswrapper[5028]: I1123 09:25:30.040438 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vg7t8" event={"ID":"7d46ccdc-7a45-468b-98f9-085e7b40c6f8","Type":"ContainerStarted","Data":"9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c"} Nov 23 09:25:32 crc kubenswrapper[5028]: I1123 09:25:32.062417 5028 generic.go:334] "Generic (PLEG): container finished" podID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerID="9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c" exitCode=0 Nov 23 09:25:32 crc kubenswrapper[5028]: I1123 09:25:32.062536 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vg7t8" event={"ID":"7d46ccdc-7a45-468b-98f9-085e7b40c6f8","Type":"ContainerDied","Data":"9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c"} Nov 23 09:25:33 crc kubenswrapper[5028]: I1123 09:25:33.075620 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vg7t8" event={"ID":"7d46ccdc-7a45-468b-98f9-085e7b40c6f8","Type":"ContainerStarted","Data":"9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d"} Nov 23 09:25:33 crc kubenswrapper[5028]: I1123 09:25:33.101181 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vg7t8" podStartSLOduration=2.6558832629999998 podStartE2EDuration="6.101146407s" podCreationTimestamp="2025-11-23 09:25:27 +0000 UTC" firstStartedPulling="2025-11-23 09:25:29.029529851 +0000 UTC m=+9312.726934620" lastFinishedPulling="2025-11-23 09:25:32.474792965 +0000 UTC m=+9316.172197764" observedRunningTime="2025-11-23 09:25:33.091604068 +0000 UTC m=+9316.789008847" watchObservedRunningTime="2025-11-23 09:25:33.101146407 +0000 UTC m=+9316.798551186" Nov 23 09:25:37 crc kubenswrapper[5028]: I1123 09:25:37.780709 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:37 crc kubenswrapper[5028]: I1123 09:25:37.781407 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:37 crc kubenswrapper[5028]: I1123 09:25:37.842542 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:38 crc kubenswrapper[5028]: I1123 09:25:38.177879 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:38 crc kubenswrapper[5028]: I1123 09:25:38.249094 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vg7t8"] Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.149056 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vg7t8" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="registry-server" containerID="cri-o://9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d" gracePeriod=2 Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.667427 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.771413 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-catalog-content\") pod \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.771550 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8l6n\" (UniqueName: \"kubernetes.io/projected/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-kube-api-access-n8l6n\") pod \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.771617 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-utilities\") pod \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\" (UID: \"7d46ccdc-7a45-468b-98f9-085e7b40c6f8\") " Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.772486 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-utilities" (OuterVolumeSpecName: "utilities") pod "7d46ccdc-7a45-468b-98f9-085e7b40c6f8" (UID: "7d46ccdc-7a45-468b-98f9-085e7b40c6f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.777014 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-kube-api-access-n8l6n" (OuterVolumeSpecName: "kube-api-access-n8l6n") pod "7d46ccdc-7a45-468b-98f9-085e7b40c6f8" (UID: "7d46ccdc-7a45-468b-98f9-085e7b40c6f8"). InnerVolumeSpecName "kube-api-access-n8l6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.818238 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d46ccdc-7a45-468b-98f9-085e7b40c6f8" (UID: "7d46ccdc-7a45-468b-98f9-085e7b40c6f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.875293 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8l6n\" (UniqueName: \"kubernetes.io/projected/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-kube-api-access-n8l6n\") on node \"crc\" DevicePath \"\"" Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.875671 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:25:40 crc kubenswrapper[5028]: I1123 09:25:40.875737 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d46ccdc-7a45-468b-98f9-085e7b40c6f8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.166507 5028 generic.go:334] "Generic (PLEG): container finished" podID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerID="9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d" exitCode=0 Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.166614 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vg7t8" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.166646 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vg7t8" event={"ID":"7d46ccdc-7a45-468b-98f9-085e7b40c6f8","Type":"ContainerDied","Data":"9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d"} Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.166738 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vg7t8" event={"ID":"7d46ccdc-7a45-468b-98f9-085e7b40c6f8","Type":"ContainerDied","Data":"d9671f1bba32cc8cd01019aa3c95145b553081fdce0a57880ee299c202d940f3"} Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.166763 5028 scope.go:117] "RemoveContainer" containerID="9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.207694 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vg7t8"] Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.219035 5028 scope.go:117] "RemoveContainer" containerID="9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.232439 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vg7t8"] Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.251070 5028 scope.go:117] "RemoveContainer" containerID="26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.326745 5028 scope.go:117] "RemoveContainer" containerID="9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d" Nov 23 09:25:41 crc kubenswrapper[5028]: E1123 09:25:41.328488 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d\": container with ID starting with 9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d not found: ID does not exist" containerID="9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.328554 
5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d"} err="failed to get container status \"9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d\": rpc error: code = NotFound desc = could not find container \"9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d\": container with ID starting with 9fa2fa2284032e41a7322115259dafd06ecd940e00a4a8d321f9cc3369e85c5d not found: ID does not exist" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.328594 5028 scope.go:117] "RemoveContainer" containerID="9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c" Nov 23 09:25:41 crc kubenswrapper[5028]: E1123 09:25:41.329216 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c\": container with ID starting with 9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c not found: ID does not exist" containerID="9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.329276 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c"} err="failed to get container status \"9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c\": rpc error: code = NotFound desc = could not find container \"9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c\": container with ID starting with 9f54b36288da2de7c653aaa56984d2622b67d63f6bfc7c8500ef9ee732dc0d9c not found: ID does not exist" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.329316 5028 scope.go:117] "RemoveContainer" containerID="26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc" Nov 23 09:25:41 crc kubenswrapper[5028]: E1123 09:25:41.329894 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc\": container with ID starting with 26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc not found: ID does not exist" containerID="26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc" Nov 23 09:25:41 crc kubenswrapper[5028]: I1123 09:25:41.329934 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc"} err="failed to get container status \"26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc\": rpc error: code = NotFound desc = could not find container \"26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc\": container with ID starting with 26c431d51b156cd1bfb087360aa292bcd45f0e324b0e0cca1944f825b2b68bfc not found: ID does not exist" Nov 23 09:25:43 crc kubenswrapper[5028]: I1123 09:25:43.071529 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" path="/var/lib/kubelet/pods/7d46ccdc-7a45-468b-98f9-085e7b40c6f8/volumes" Nov 23 09:26:00 crc kubenswrapper[5028]: I1123 09:26:00.946167 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:26:00 crc kubenswrapper[5028]: I1123 09:26:00.946912 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:26:21 crc kubenswrapper[5028]: I1123 09:26:21.647117 5028 generic.go:334] "Generic (PLEG): container finished" podID="da2b8220-d1d8-40ae-a96a-54e3a1c13c10" containerID="37a2c6bd4ccda14f2e138433edd91cbbc8c8695df4c05a4b1cd3a604abb9c0ed" exitCode=0 Nov 23 09:26:21 crc kubenswrapper[5028]: I1123 09:26:21.647227 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" event={"ID":"da2b8220-d1d8-40ae-a96a-54e3a1c13c10","Type":"ContainerDied","Data":"37a2c6bd4ccda14f2e138433edd91cbbc8c8695df4c05a4b1cd3a604abb9c0ed"} Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.146681 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.271581 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-inventory\") pod \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.271671 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ceph\") pod \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.271705 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ssh-key\") pod \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.271828 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwrxg\" (UniqueName: \"kubernetes.io/projected/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-kube-api-access-zwrxg\") pod \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.271849 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-secret-0\") pod \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.271914 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-combined-ca-bundle\") pod \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\" (UID: \"da2b8220-d1d8-40ae-a96a-54e3a1c13c10\") " Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.278767 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-kube-api-access-zwrxg" (OuterVolumeSpecName: "kube-api-access-zwrxg") pod "da2b8220-d1d8-40ae-a96a-54e3a1c13c10" (UID: "da2b8220-d1d8-40ae-a96a-54e3a1c13c10"). InnerVolumeSpecName "kube-api-access-zwrxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.279836 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "da2b8220-d1d8-40ae-a96a-54e3a1c13c10" (UID: "da2b8220-d1d8-40ae-a96a-54e3a1c13c10"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.280293 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ceph" (OuterVolumeSpecName: "ceph") pod "da2b8220-d1d8-40ae-a96a-54e3a1c13c10" (UID: "da2b8220-d1d8-40ae-a96a-54e3a1c13c10"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.303230 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-inventory" (OuterVolumeSpecName: "inventory") pod "da2b8220-d1d8-40ae-a96a-54e3a1c13c10" (UID: "da2b8220-d1d8-40ae-a96a-54e3a1c13c10"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.312602 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "da2b8220-d1d8-40ae-a96a-54e3a1c13c10" (UID: "da2b8220-d1d8-40ae-a96a-54e3a1c13c10"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.319381 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "da2b8220-d1d8-40ae-a96a-54e3a1c13c10" (UID: "da2b8220-d1d8-40ae-a96a-54e3a1c13c10"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.374465 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwrxg\" (UniqueName: \"kubernetes.io/projected/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-kube-api-access-zwrxg\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.374502 5028 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.374514 5028 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.374525 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.374534 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.374542 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da2b8220-d1d8-40ae-a96a-54e3a1c13c10-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.675588 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.675571 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-f6p9x" event={"ID":"da2b8220-d1d8-40ae-a96a-54e3a1c13c10","Type":"ContainerDied","Data":"ed2a51a6a7ef717761b3869889578b2471cc8e7474ef3a05f736f27561b9312f"} Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.676219 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2a51a6a7ef717761b3869889578b2471cc8e7474ef3a05f736f27561b9312f" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.912924 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-jz4qs"] Nov 23 09:26:23 crc kubenswrapper[5028]: E1123 09:26:23.913594 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2b8220-d1d8-40ae-a96a-54e3a1c13c10" containerName="libvirt-openstack-openstack-cell1" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.913611 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2b8220-d1d8-40ae-a96a-54e3a1c13c10" containerName="libvirt-openstack-openstack-cell1" Nov 23 09:26:23 crc kubenswrapper[5028]: E1123 09:26:23.913627 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="extract-content" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.913633 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="extract-content" Nov 23 09:26:23 crc kubenswrapper[5028]: E1123 09:26:23.913646 5028 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="registry-server" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.913653 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="registry-server" Nov 23 09:26:23 crc kubenswrapper[5028]: E1123 09:26:23.913689 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="extract-utilities" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.913695 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="extract-utilities" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.913936 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d46ccdc-7a45-468b-98f9-085e7b40c6f8" containerName="registry-server" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.913986 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2b8220-d1d8-40ae-a96a-54e3a1c13c10" containerName="libvirt-openstack-openstack-cell1" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.914937 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.918850 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.919260 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.919283 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.919595 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.919632 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.919598 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.919707 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:26:23 crc kubenswrapper[5028]: I1123 09:26:23.929925 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-jz4qs"] Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.013157 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.013412 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.013733 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.013873 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.014069 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.014217 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnkmv\" (UniqueName: \"kubernetes.io/projected/067cfd9c-502a-4656-b495-b43dffc143a8-kube-api-access-lnkmv\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.014309 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.014404 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.014487 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.014714 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-inventory\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: 
\"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.014780 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ceph\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.116532 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.116832 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.116867 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.116982 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-inventory\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117019 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ceph\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117106 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117162 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117196 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117246 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117311 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117363 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnkmv\" (UniqueName: \"kubernetes.io/projected/067cfd9c-502a-4656-b495-b43dffc143a8-kube-api-access-lnkmv\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.117750 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.118567 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.127773 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.127814 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ceph\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.129070 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.132496 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.133250 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.147506 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-inventory\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.153917 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.171742 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.175836 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnkmv\" (UniqueName: \"kubernetes.io/projected/067cfd9c-502a-4656-b495-b43dffc143a8-kube-api-access-lnkmv\") pod \"nova-cell1-openstack-openstack-cell1-jz4qs\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.248401 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:26:24 crc kubenswrapper[5028]: I1123 09:26:24.791008 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-jz4qs"] Nov 23 09:26:25 crc kubenswrapper[5028]: I1123 09:26:25.702207 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" event={"ID":"067cfd9c-502a-4656-b495-b43dffc143a8","Type":"ContainerStarted","Data":"77924b66e10cc650ed75e5655e0c03ce37cbe9ddb29bdebd6f53531e148c1a9a"} Nov 23 09:26:25 crc kubenswrapper[5028]: I1123 09:26:25.702261 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" event={"ID":"067cfd9c-502a-4656-b495-b43dffc143a8","Type":"ContainerStarted","Data":"df3072f410dc67d72ac9b87eecf7d782ab95dbc322944acab965ca09e1d8ca4d"} Nov 23 09:26:25 crc kubenswrapper[5028]: I1123 09:26:25.727336 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" podStartSLOduration=2.3102634650000002 podStartE2EDuration="2.727304674s" podCreationTimestamp="2025-11-23 09:26:23 +0000 UTC" firstStartedPulling="2025-11-23 09:26:24.795845287 +0000 UTC m=+9368.493250066" lastFinishedPulling="2025-11-23 09:26:25.212886496 +0000 UTC m=+9368.910291275" observedRunningTime="2025-11-23 09:26:25.723745044 +0000 UTC m=+9369.421149823" watchObservedRunningTime="2025-11-23 09:26:25.727304674 +0000 UTC m=+9369.424709453" Nov 23 09:26:30 crc kubenswrapper[5028]: I1123 09:26:30.946645 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:26:30 crc kubenswrapper[5028]: I1123 09:26:30.946936 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:27:00 crc kubenswrapper[5028]: I1123 09:27:00.946325 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:27:00 crc kubenswrapper[5028]: I1123 09:27:00.947198 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:27:00 crc kubenswrapper[5028]: I1123 09:27:00.947271 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 09:27:00 crc kubenswrapper[5028]: I1123 09:27:00.948353 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efcc42724720e9fa7c2aca2012d555e223a989f60cd99c2dbeeedbe4692d339d"} 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:27:00 crc kubenswrapper[5028]: I1123 09:27:00.948443 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://efcc42724720e9fa7c2aca2012d555e223a989f60cd99c2dbeeedbe4692d339d" gracePeriod=600 Nov 23 09:27:01 crc kubenswrapper[5028]: I1123 09:27:01.182583 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="efcc42724720e9fa7c2aca2012d555e223a989f60cd99c2dbeeedbe4692d339d" exitCode=0 Nov 23 09:27:01 crc kubenswrapper[5028]: I1123 09:27:01.182656 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"efcc42724720e9fa7c2aca2012d555e223a989f60cd99c2dbeeedbe4692d339d"} Nov 23 09:27:01 crc kubenswrapper[5028]: I1123 09:27:01.183295 5028 scope.go:117] "RemoveContainer" containerID="49826c1a38dc6041ff4e0f1b6872d476df01b0d40949bfef4415d13bd3848686" Nov 23 09:27:02 crc kubenswrapper[5028]: I1123 09:27:02.204846 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"} Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.346358 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gr85v"] Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.355178 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.362476 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr85v"] Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.526694 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjvr\" (UniqueName: \"kubernetes.io/projected/e75e9a3b-073a-4d52-afbe-2cce210d1284-kube-api-access-8cjvr\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.527059 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-utilities\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.527086 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-catalog-content\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.629315 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-utilities\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.629361 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-catalog-content\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.629549 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjvr\" (UniqueName: \"kubernetes.io/projected/e75e9a3b-073a-4d52-afbe-2cce210d1284-kube-api-access-8cjvr\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.629989 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-utilities\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.630014 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-catalog-content\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.658295 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8cjvr\" (UniqueName: \"kubernetes.io/projected/e75e9a3b-073a-4d52-afbe-2cce210d1284-kube-api-access-8cjvr\") pod \"redhat-operators-gr85v\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:20 crc kubenswrapper[5028]: I1123 09:28:20.684306 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:21 crc kubenswrapper[5028]: I1123 09:28:21.203648 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gr85v"] Nov 23 09:28:22 crc kubenswrapper[5028]: I1123 09:28:22.108750 5028 generic.go:334] "Generic (PLEG): container finished" podID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerID="75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92" exitCode=0 Nov 23 09:28:22 crc kubenswrapper[5028]: I1123 09:28:22.108857 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr85v" event={"ID":"e75e9a3b-073a-4d52-afbe-2cce210d1284","Type":"ContainerDied","Data":"75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92"} Nov 23 09:28:22 crc kubenswrapper[5028]: I1123 09:28:22.109289 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr85v" event={"ID":"e75e9a3b-073a-4d52-afbe-2cce210d1284","Type":"ContainerStarted","Data":"21ab1e04d4329e380b1fce4356d4e65e95fa68f6ab817266af1874062c05e697"} Nov 23 09:28:23 crc kubenswrapper[5028]: I1123 09:28:23.123545 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr85v" event={"ID":"e75e9a3b-073a-4d52-afbe-2cce210d1284","Type":"ContainerStarted","Data":"ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797"} Nov 23 09:28:28 crc kubenswrapper[5028]: I1123 09:28:28.218689 5028 generic.go:334] "Generic (PLEG): container finished" podID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerID="ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797" exitCode=0 Nov 23 09:28:28 crc kubenswrapper[5028]: I1123 09:28:28.218749 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr85v" event={"ID":"e75e9a3b-073a-4d52-afbe-2cce210d1284","Type":"ContainerDied","Data":"ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797"} Nov 23 09:28:29 crc kubenswrapper[5028]: I1123 09:28:29.233990 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr85v" event={"ID":"e75e9a3b-073a-4d52-afbe-2cce210d1284","Type":"ContainerStarted","Data":"8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033"} Nov 23 09:28:29 crc kubenswrapper[5028]: I1123 09:28:29.255242 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gr85v" podStartSLOduration=2.741536617 podStartE2EDuration="9.255221055s" podCreationTimestamp="2025-11-23 09:28:20 +0000 UTC" firstStartedPulling="2025-11-23 09:28:22.111039351 +0000 UTC m=+9485.808444130" lastFinishedPulling="2025-11-23 09:28:28.624723769 +0000 UTC m=+9492.322128568" observedRunningTime="2025-11-23 09:28:29.254411704 +0000 UTC m=+9492.951816483" watchObservedRunningTime="2025-11-23 09:28:29.255221055 +0000 UTC m=+9492.952625834" Nov 23 09:28:30 crc kubenswrapper[5028]: I1123 09:28:30.684903 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 
09:28:30 crc kubenswrapper[5028]: I1123 09:28:30.685927 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:31 crc kubenswrapper[5028]: I1123 09:28:31.754223 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gr85v" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="registry-server" probeResult="failure" output=< Nov 23 09:28:31 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 09:28:31 crc kubenswrapper[5028]: > Nov 23 09:28:40 crc kubenswrapper[5028]: I1123 09:28:40.740831 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:40 crc kubenswrapper[5028]: I1123 09:28:40.803000 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:40 crc kubenswrapper[5028]: I1123 09:28:40.980450 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr85v"] Nov 23 09:28:42 crc kubenswrapper[5028]: I1123 09:28:42.692771 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gr85v" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="registry-server" containerID="cri-o://8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033" gracePeriod=2 Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.253226 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.410175 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cjvr\" (UniqueName: \"kubernetes.io/projected/e75e9a3b-073a-4d52-afbe-2cce210d1284-kube-api-access-8cjvr\") pod \"e75e9a3b-073a-4d52-afbe-2cce210d1284\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.410519 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-catalog-content\") pod \"e75e9a3b-073a-4d52-afbe-2cce210d1284\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.410596 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-utilities\") pod \"e75e9a3b-073a-4d52-afbe-2cce210d1284\" (UID: \"e75e9a3b-073a-4d52-afbe-2cce210d1284\") " Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.411925 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-utilities" (OuterVolumeSpecName: "utilities") pod "e75e9a3b-073a-4d52-afbe-2cce210d1284" (UID: "e75e9a3b-073a-4d52-afbe-2cce210d1284"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.425183 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75e9a3b-073a-4d52-afbe-2cce210d1284-kube-api-access-8cjvr" (OuterVolumeSpecName: "kube-api-access-8cjvr") pod "e75e9a3b-073a-4d52-afbe-2cce210d1284" (UID: "e75e9a3b-073a-4d52-afbe-2cce210d1284"). InnerVolumeSpecName "kube-api-access-8cjvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.514078 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.514119 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cjvr\" (UniqueName: \"kubernetes.io/projected/e75e9a3b-073a-4d52-afbe-2cce210d1284-kube-api-access-8cjvr\") on node \"crc\" DevicePath \"\"" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.532605 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e75e9a3b-073a-4d52-afbe-2cce210d1284" (UID: "e75e9a3b-073a-4d52-afbe-2cce210d1284"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.617592 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e75e9a3b-073a-4d52-afbe-2cce210d1284-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.706048 5028 generic.go:334] "Generic (PLEG): container finished" podID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerID="8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033" exitCode=0 Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.706110 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr85v" event={"ID":"e75e9a3b-073a-4d52-afbe-2cce210d1284","Type":"ContainerDied","Data":"8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033"} Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.706143 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gr85v" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.706175 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gr85v" event={"ID":"e75e9a3b-073a-4d52-afbe-2cce210d1284","Type":"ContainerDied","Data":"21ab1e04d4329e380b1fce4356d4e65e95fa68f6ab817266af1874062c05e697"} Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.706202 5028 scope.go:117] "RemoveContainer" containerID="8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.740573 5028 scope.go:117] "RemoveContainer" containerID="ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.749653 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gr85v"] Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.769920 5028 scope.go:117] "RemoveContainer" containerID="75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.776346 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gr85v"] Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.827573 5028 scope.go:117] "RemoveContainer" containerID="8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033" Nov 23 09:28:43 crc kubenswrapper[5028]: E1123 09:28:43.828511 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033\": container with ID starting with 8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033 not found: ID does not exist" containerID="8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.828548 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033"} err="failed to get container status \"8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033\": rpc error: code = NotFound desc = could not find container \"8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033\": container with ID starting with 8904fc230eca6a7d921d7b60ec16804ae4eed1f31e23c9f06263f85aba13a033 not found: ID does not exist" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.828572 5028 scope.go:117] "RemoveContainer" containerID="ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797" Nov 23 09:28:43 crc kubenswrapper[5028]: E1123 09:28:43.830026 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797\": container with ID starting with ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797 not found: ID does not exist" containerID="ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.830050 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797"} err="failed to get container status \"ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797\": rpc error: code = NotFound desc = could not find container 
\"ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797\": container with ID starting with ccab46cee690a107f3d7a1da731329b5283529d5f32e4cf12b949a6440815797 not found: ID does not exist" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.830066 5028 scope.go:117] "RemoveContainer" containerID="75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92" Nov 23 09:28:43 crc kubenswrapper[5028]: E1123 09:28:43.831522 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92\": container with ID starting with 75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92 not found: ID does not exist" containerID="75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92" Nov 23 09:28:43 crc kubenswrapper[5028]: I1123 09:28:43.831565 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92"} err="failed to get container status \"75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92\": rpc error: code = NotFound desc = could not find container \"75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92\": container with ID starting with 75516d6e40e4ff533a37f7b6705bf8c2be09e6f92e6e51720b5aba9489eb6c92 not found: ID does not exist" Nov 23 09:28:45 crc kubenswrapper[5028]: I1123 09:28:45.067741 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" path="/var/lib/kubelet/pods/e75e9a3b-073a-4d52-afbe-2cce210d1284/volumes" Nov 23 09:29:30 crc kubenswrapper[5028]: I1123 09:29:30.946432 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:29:30 crc kubenswrapper[5028]: I1123 09:29:30.947179 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.185663 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2"] Nov 23 09:30:00 crc kubenswrapper[5028]: E1123 09:30:00.186845 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="extract-utilities" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.186862 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="extract-utilities" Nov 23 09:30:00 crc kubenswrapper[5028]: E1123 09:30:00.186887 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="extract-content" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.186893 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="extract-content" Nov 23 09:30:00 crc kubenswrapper[5028]: E1123 09:30:00.186913 5028 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="registry-server" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.186919 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="registry-server" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.187197 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75e9a3b-073a-4d52-afbe-2cce210d1284" containerName="registry-server" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.188197 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.191616 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.191706 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.204441 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2"] Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.347387 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-config-volume\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.347490 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-secret-volume\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.347540 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zslm8\" (UniqueName: \"kubernetes.io/projected/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-kube-api-access-zslm8\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.449962 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-secret-volume\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.450037 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zslm8\" (UniqueName: \"kubernetes.io/projected/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-kube-api-access-zslm8\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.450192 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-config-volume\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.450935 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-config-volume\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.756547 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-secret-volume\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.759549 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zslm8\" (UniqueName: \"kubernetes.io/projected/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-kube-api-access-zslm8\") pod \"collect-profiles-29398170-2j7d2\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.814392 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.947397 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:30:00 crc kubenswrapper[5028]: I1123 09:30:00.947525 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:30:01 crc kubenswrapper[5028]: I1123 09:30:01.408946 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2"] Nov 23 09:30:02 crc kubenswrapper[5028]: I1123 09:30:02.420759 5028 generic.go:334] "Generic (PLEG): container finished" podID="bd4f83ee-b756-4bec-be49-a05b0efd2ea1" containerID="9a15b30dbb1ce294503e9b0a957af98cf05812b9a12f88a58eb42e50f5b21546" exitCode=0 Nov 23 09:30:02 crc kubenswrapper[5028]: I1123 09:30:02.420831 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" event={"ID":"bd4f83ee-b756-4bec-be49-a05b0efd2ea1","Type":"ContainerDied","Data":"9a15b30dbb1ce294503e9b0a957af98cf05812b9a12f88a58eb42e50f5b21546"} Nov 23 09:30:02 crc kubenswrapper[5028]: I1123 09:30:02.421123 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" 
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.434839 5028 generic.go:334] "Generic (PLEG): container finished" podID="067cfd9c-502a-4656-b495-b43dffc143a8" containerID="77924b66e10cc650ed75e5655e0c03ce37cbe9ddb29bdebd6f53531e148c1a9a" exitCode=0
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.435034 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" event={"ID":"067cfd9c-502a-4656-b495-b43dffc143a8","Type":"ContainerDied","Data":"77924b66e10cc650ed75e5655e0c03ce37cbe9ddb29bdebd6f53531e148c1a9a"}
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.815182 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2"
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.926255 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-secret-volume\") pod \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") "
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.926614 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zslm8\" (UniqueName: \"kubernetes.io/projected/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-kube-api-access-zslm8\") pod \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") "
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.926636 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-config-volume\") pod \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\" (UID: \"bd4f83ee-b756-4bec-be49-a05b0efd2ea1\") "
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.928483 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd4f83ee-b756-4bec-be49-a05b0efd2ea1" (UID: "bd4f83ee-b756-4bec-be49-a05b0efd2ea1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.932404 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd4f83ee-b756-4bec-be49-a05b0efd2ea1" (UID: "bd4f83ee-b756-4bec-be49-a05b0efd2ea1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:30:03 crc kubenswrapper[5028]: I1123 09:30:03.932920 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-kube-api-access-zslm8" (OuterVolumeSpecName: "kube-api-access-zslm8") pod "bd4f83ee-b756-4bec-be49-a05b0efd2ea1" (UID: "bd4f83ee-b756-4bec-be49-a05b0efd2ea1"). InnerVolumeSpecName "kube-api-access-zslm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.029599 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.029668 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zslm8\" (UniqueName: \"kubernetes.io/projected/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-kube-api-access-zslm8\") on node \"crc\" DevicePath \"\""
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.029683 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd4f83ee-b756-4bec-be49-a05b0efd2ea1-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.448105 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2"
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.448503 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2" event={"ID":"bd4f83ee-b756-4bec-be49-a05b0efd2ea1","Type":"ContainerDied","Data":"6e87c4c9b3ede7a636df58c8fdf13c1de8644a4ce83ac63afe3d8fcd5f2e0a6e"}
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.448554 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e87c4c9b3ede7a636df58c8fdf13c1de8644a4ce83ac63afe3d8fcd5f2e0a6e"
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.897835 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm"]
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.909346 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398125-fsnxm"]
Nov 23 09:30:04 crc kubenswrapper[5028]: I1123 09:30:04.934011 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs"
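
A side note that makes these job names checkable: the CronJob controller suffixes each Job with its scheduled time expressed in minutes since the Unix epoch, so the two collect-profiles suffixes above work out as

    29398170 min * 60 = 1763890200 s = 2025-11-23 09:30:00 UTC  (this run)
    29398125 min * 60 = 1763887500 s = 2025-11-23 08:45:00 UTC  (the 45-minute-older run whose pod the SyncLoop DELETE/REMOVE pair above is cleaning up, presumably under the CronJob's history limit)
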
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.051287 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-0\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.051424 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-0\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052255 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-0\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052286 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ceph\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052339 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-1\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052367 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-inventory\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052395 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-1\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052429 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnkmv\" (UniqueName: \"kubernetes.io/projected/067cfd9c-502a-4656-b495-b43dffc143a8-kube-api-access-lnkmv\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052491 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-1\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") "
Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052584 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-combined-ca-bundle\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.052637 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ssh-key\") pod \"067cfd9c-502a-4656-b495-b43dffc143a8\" (UID: \"067cfd9c-502a-4656-b495-b43dffc143a8\") " Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.058888 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067cfd9c-502a-4656-b495-b43dffc143a8-kube-api-access-lnkmv" (OuterVolumeSpecName: "kube-api-access-lnkmv") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "kube-api-access-lnkmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.061333 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ceph" (OuterVolumeSpecName: "ceph") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.068390 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bf389f-30a9-4a74-a931-e8a28b61f7f6" path="/var/lib/kubelet/pods/51bf389f-30a9-4a74-a931-e8a28b61f7f6/volumes" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.070989 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.082540 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.084232 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.084962 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "nova-cells-global-config-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.090940 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.091045 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.098005 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.109366 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-inventory" (OuterVolumeSpecName: "inventory") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.111031 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "067cfd9c-502a-4656-b495-b43dffc143a8" (UID: "067cfd9c-502a-4656-b495-b43dffc143a8"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157356 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157461 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157481 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnkmv\" (UniqueName: \"kubernetes.io/projected/067cfd9c-502a-4656-b495-b43dffc143a8-kube-api-access-lnkmv\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157495 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157551 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157569 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157587 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157600 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157639 5028 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157654 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.157666 5028 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/067cfd9c-502a-4656-b495-b43dffc143a8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.461903 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" event={"ID":"067cfd9c-502a-4656-b495-b43dffc143a8","Type":"ContainerDied","Data":"df3072f410dc67d72ac9b87eecf7d782ab95dbc322944acab965ca09e1d8ca4d"} Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.461974 5028 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="df3072f410dc67d72ac9b87eecf7d782ab95dbc322944acab965ca09e1d8ca4d" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.462066 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-jz4qs" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.629393 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hxr5g"] Nov 23 09:30:05 crc kubenswrapper[5028]: E1123 09:30:05.630070 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067cfd9c-502a-4656-b495-b43dffc143a8" containerName="nova-cell1-openstack-openstack-cell1" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.630093 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="067cfd9c-502a-4656-b495-b43dffc143a8" containerName="nova-cell1-openstack-openstack-cell1" Nov 23 09:30:05 crc kubenswrapper[5028]: E1123 09:30:05.630118 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4f83ee-b756-4bec-be49-a05b0efd2ea1" containerName="collect-profiles" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.630126 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4f83ee-b756-4bec-be49-a05b0efd2ea1" containerName="collect-profiles" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.630368 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="067cfd9c-502a-4656-b495-b43dffc143a8" containerName="nova-cell1-openstack-openstack-cell1" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.630398 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd4f83ee-b756-4bec-be49-a05b0efd2ea1" containerName="collect-profiles" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.631299 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.633705 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.636134 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.636167 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.636408 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.636557 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.647895 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hxr5g"] Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.777501 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceph\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.777861 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.778173 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lx9\" (UniqueName: \"kubernetes.io/projected/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-kube-api-access-c7lx9\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.778573 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.778749 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-inventory\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.778849 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.779090 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.779162 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881294 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-inventory\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881354 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881412 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881440 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881518 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceph\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881565 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881604 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7lx9\" (UniqueName: \"kubernetes.io/projected/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-kube-api-access-c7lx9\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.881680 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.888045 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.888045 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.888058 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-inventory\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.888908 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.892137 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.892431 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceph\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " 
pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.894192 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.898424 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7lx9\" (UniqueName: \"kubernetes.io/projected/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-kube-api-access-c7lx9\") pod \"telemetry-openstack-openstack-cell1-hxr5g\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:05 crc kubenswrapper[5028]: I1123 09:30:05.986467 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:30:06 crc kubenswrapper[5028]: I1123 09:30:06.609722 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hxr5g"] Nov 23 09:30:06 crc kubenswrapper[5028]: I1123 09:30:06.614467 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:30:07 crc kubenswrapper[5028]: I1123 09:30:07.494120 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" event={"ID":"2a7e62e6-7eca-4f20-821f-fc8c61b58dda","Type":"ContainerStarted","Data":"934c3a9fc3f785aa33ff2f26864726c24176b9549e67ef0474eaf8871c4c8303"} Nov 23 09:30:08 crc kubenswrapper[5028]: I1123 09:30:08.505912 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" event={"ID":"2a7e62e6-7eca-4f20-821f-fc8c61b58dda","Type":"ContainerStarted","Data":"56234831d0b6ed8f1409616ed7138968c487f5c29c9e96874a6f9492a2233bf4"} Nov 23 09:30:08 crc kubenswrapper[5028]: I1123 09:30:08.529694 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" podStartSLOduration=2.749291771 podStartE2EDuration="3.529659292s" podCreationTimestamp="2025-11-23 09:30:05 +0000 UTC" firstStartedPulling="2025-11-23 09:30:06.613868075 +0000 UTC m=+9590.311272854" lastFinishedPulling="2025-11-23 09:30:07.394235586 +0000 UTC m=+9591.091640375" observedRunningTime="2025-11-23 09:30:08.522971825 +0000 UTC m=+9592.220376604" watchObservedRunningTime="2025-11-23 09:30:08.529659292 +0000 UTC m=+9592.227064061" Nov 23 09:30:30 crc kubenswrapper[5028]: I1123 09:30:30.946372 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:30:30 crc kubenswrapper[5028]: I1123 09:30:30.946999 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:30:30 crc kubenswrapper[5028]: I1123 
Nov 23 09:30:30 crc kubenswrapper[5028]: I1123 09:30:30.947815 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 09:30:30 crc kubenswrapper[5028]: I1123 09:30:30.947899 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" gracePeriod=600
Nov 23 09:30:31 crc kubenswrapper[5028]: E1123 09:30:31.082896 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:30:31 crc kubenswrapper[5028]: I1123 09:30:31.771811 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" exitCode=0
Nov 23 09:30:31 crc kubenswrapper[5028]: I1123 09:30:31.771854 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"}
Nov 23 09:30:31 crc kubenswrapper[5028]: I1123 09:30:31.771889 5028 scope.go:117] "RemoveContainer" containerID="efcc42724720e9fa7c2aca2012d555e223a989f60cd99c2dbeeedbe4692d339d"
Nov 23 09:30:31 crc kubenswrapper[5028]: I1123 09:30:31.772527 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:30:31 crc kubenswrapper[5028]: E1123 09:30:31.772878 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:30:32 crc kubenswrapper[5028]: I1123 09:30:32.939015 5028 scope.go:117] "RemoveContainer" containerID="20d04bd511a431b08cc0bdd8197803aa7cfeecede596d6c7f279ebdc69e86030"
Nov 23 09:30:46 crc kubenswrapper[5028]: I1123 09:30:46.053482 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:30:46 crc kubenswrapper[5028]: E1123 09:30:46.054772 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
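
The sequence above (probe unhealthy, "will be restarted", kill with gracePeriod=600, then "Error syncing pod") is the kubelet's liveness-restart path; from here on the paired I/E lines repeat as each restart attempt is refused by CrashLoopBackOff, whose delay doubles per restart up to the 5m0s cap quoted in the message. A minimal sketch of the probe being evaluated (Go, against a recent k8s.io/api; host, path and port come from the "Probe failed" output above, while the timing fields are illustrative defaults, not recorded in the log):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // livenessProbe mirrors the failing check: GET http://127.0.0.1:8798/health.
    // "connection refused" means nothing is listening on 8798, so the kubelet
    // kills the container and restarts it under exponential back-off.
    var livenessProbe = &corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Host: "127.0.0.1", // as probed in the log
                Path: "/health",
                Port: intstr.FromInt(8798),
            },
        },
        PeriodSeconds:    10, // illustrative; not recorded in the log
        FailureThreshold: 3,  // illustrative; not recorded in the log
    }

    func main() { _ = livenessProbe }
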
Nov 23 09:30:59 crc kubenswrapper[5028]: I1123 09:30:59.053200 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:30:59 crc kubenswrapper[5028]: E1123 09:30:59.054015 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:31:13 crc kubenswrapper[5028]: I1123 09:31:13.053780 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:31:13 crc kubenswrapper[5028]: E1123 09:31:13.054800 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:31:27 crc kubenswrapper[5028]: I1123 09:31:27.062220 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:31:27 crc kubenswrapper[5028]: E1123 09:31:27.063025 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:31:41 crc kubenswrapper[5028]: I1123 09:31:41.054621 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:31:41 crc kubenswrapper[5028]: E1123 09:31:41.055814 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:31:54 crc kubenswrapper[5028]: I1123 09:31:54.055421 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:31:54 crc kubenswrapper[5028]: E1123 09:31:54.057185 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:32:09 crc kubenswrapper[5028]: I1123 09:32:09.054108 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:32:09 crc kubenswrapper[5028]: E1123 09:32:09.055066 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:32:24 crc kubenswrapper[5028]: I1123 09:32:24.052906 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:32:24 crc kubenswrapper[5028]: E1123 09:32:24.054909 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:32:36 crc kubenswrapper[5028]: I1123 09:32:36.053288 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:32:36 crc kubenswrapper[5028]: E1123 09:32:36.054117 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:32:49 crc kubenswrapper[5028]: I1123 09:32:49.053262 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:32:49 crc kubenswrapper[5028]: E1123 09:32:49.054132 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:33:00 crc kubenswrapper[5028]: I1123 09:33:00.053538 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:33:00 crc kubenswrapper[5028]: E1123 09:33:00.054343 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.525263 5028 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-tz5g7"] Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.528427 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.536310 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz5g7"] Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.594450 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn6r5\" (UniqueName: \"kubernetes.io/projected/ce91a66e-ca27-4afb-a47b-66726fac4c66-kube-api-access-vn6r5\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.594503 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-utilities\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.594525 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-catalog-content\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.696748 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn6r5\" (UniqueName: \"kubernetes.io/projected/ce91a66e-ca27-4afb-a47b-66726fac4c66-kube-api-access-vn6r5\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.696799 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-utilities\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.696816 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-catalog-content\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.697354 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-catalog-content\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.697520 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-utilities\") pod \"certified-operators-tz5g7\" (UID: 
\"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.717673 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn6r5\" (UniqueName: \"kubernetes.io/projected/ce91a66e-ca27-4afb-a47b-66726fac4c66-kube-api-access-vn6r5\") pod \"certified-operators-tz5g7\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:05 crc kubenswrapper[5028]: I1123 09:33:05.859735 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:06 crc kubenswrapper[5028]: I1123 09:33:06.427070 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tz5g7"] Nov 23 09:33:06 crc kubenswrapper[5028]: I1123 09:33:06.589363 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz5g7" event={"ID":"ce91a66e-ca27-4afb-a47b-66726fac4c66","Type":"ContainerStarted","Data":"a2eb32ea9a381e41a91273753a2a5b8616ae90930b33f067c3f2a38e2c71981f"} Nov 23 09:33:07 crc kubenswrapper[5028]: I1123 09:33:07.606102 5028 generic.go:334] "Generic (PLEG): container finished" podID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerID="84beffcd3a15939657c8d14ad22e31575f391f6fe2a5096483ba0ecd7c3cacc1" exitCode=0 Nov 23 09:33:07 crc kubenswrapper[5028]: I1123 09:33:07.606333 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz5g7" event={"ID":"ce91a66e-ca27-4afb-a47b-66726fac4c66","Type":"ContainerDied","Data":"84beffcd3a15939657c8d14ad22e31575f391f6fe2a5096483ba0ecd7c3cacc1"} Nov 23 09:33:08 crc kubenswrapper[5028]: I1123 09:33:08.620143 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz5g7" event={"ID":"ce91a66e-ca27-4afb-a47b-66726fac4c66","Type":"ContainerStarted","Data":"1d8e8a2d5722e375d3d2d9a07e195f10126568f5079b1754f1dc17fe911687ff"} Nov 23 09:33:09 crc kubenswrapper[5028]: I1123 09:33:09.633737 5028 generic.go:334] "Generic (PLEG): container finished" podID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerID="1d8e8a2d5722e375d3d2d9a07e195f10126568f5079b1754f1dc17fe911687ff" exitCode=0 Nov 23 09:33:09 crc kubenswrapper[5028]: I1123 09:33:09.633841 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz5g7" event={"ID":"ce91a66e-ca27-4afb-a47b-66726fac4c66","Type":"ContainerDied","Data":"1d8e8a2d5722e375d3d2d9a07e195f10126568f5079b1754f1dc17fe911687ff"} Nov 23 09:33:10 crc kubenswrapper[5028]: I1123 09:33:10.657682 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz5g7" event={"ID":"ce91a66e-ca27-4afb-a47b-66726fac4c66","Type":"ContainerStarted","Data":"337b643539db249152a1a93162e623a535c3dcd1bf7ed5b8a06edeeca81e3705"} Nov 23 09:33:10 crc kubenswrapper[5028]: I1123 09:33:10.676779 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tz5g7" podStartSLOduration=3.247883875 podStartE2EDuration="5.676759805s" podCreationTimestamp="2025-11-23 09:33:05 +0000 UTC" firstStartedPulling="2025-11-23 09:33:07.60865438 +0000 UTC m=+9771.306059159" lastFinishedPulling="2025-11-23 09:33:10.03753029 +0000 UTC m=+9773.734935089" observedRunningTime="2025-11-23 09:33:10.675099444 +0000 UTC m=+9774.372504223" 
Nov 23 09:33:14 crc kubenswrapper[5028]: I1123 09:33:14.053297 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb"
Nov 23 09:33:14 crc kubenswrapper[5028]: E1123 09:33:14.053902 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:33:15 crc kubenswrapper[5028]: I1123 09:33:15.860199 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tz5g7"
Nov 23 09:33:15 crc kubenswrapper[5028]: I1123 09:33:15.861419 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tz5g7"
Nov 23 09:33:15 crc kubenswrapper[5028]: I1123 09:33:15.914823 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tz5g7"
Nov 23 09:33:16 crc kubenswrapper[5028]: I1123 09:33:16.814410 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tz5g7"
Nov 23 09:33:16 crc kubenswrapper[5028]: I1123 09:33:16.879472 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz5g7"]
Nov 23 09:33:18 crc kubenswrapper[5028]: I1123 09:33:18.774675 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tz5g7" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerName="registry-server" containerID="cri-o://337b643539db249152a1a93162e623a535c3dcd1bf7ed5b8a06edeeca81e3705" gracePeriod=2
Nov 23 09:33:19 crc kubenswrapper[5028]: I1123 09:33:19.790669 5028 generic.go:334] "Generic (PLEG): container finished" podID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerID="337b643539db249152a1a93162e623a535c3dcd1bf7ed5b8a06edeeca81e3705" exitCode=0
Nov 23 09:33:19 crc kubenswrapper[5028]: I1123 09:33:19.790674 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz5g7" event={"ID":"ce91a66e-ca27-4afb-a47b-66726fac4c66","Type":"ContainerDied","Data":"337b643539db249152a1a93162e623a535c3dcd1bf7ed5b8a06edeeca81e3705"}
Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.016726 5028 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.109821 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn6r5\" (UniqueName: \"kubernetes.io/projected/ce91a66e-ca27-4afb-a47b-66726fac4c66-kube-api-access-vn6r5\") pod \"ce91a66e-ca27-4afb-a47b-66726fac4c66\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.109893 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-utilities\") pod \"ce91a66e-ca27-4afb-a47b-66726fac4c66\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.110125 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-catalog-content\") pod \"ce91a66e-ca27-4afb-a47b-66726fac4c66\" (UID: \"ce91a66e-ca27-4afb-a47b-66726fac4c66\") " Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.110960 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-utilities" (OuterVolumeSpecName: "utilities") pod "ce91a66e-ca27-4afb-a47b-66726fac4c66" (UID: "ce91a66e-ca27-4afb-a47b-66726fac4c66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.116234 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce91a66e-ca27-4afb-a47b-66726fac4c66-kube-api-access-vn6r5" (OuterVolumeSpecName: "kube-api-access-vn6r5") pod "ce91a66e-ca27-4afb-a47b-66726fac4c66" (UID: "ce91a66e-ca27-4afb-a47b-66726fac4c66"). InnerVolumeSpecName "kube-api-access-vn6r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.154657 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce91a66e-ca27-4afb-a47b-66726fac4c66" (UID: "ce91a66e-ca27-4afb-a47b-66726fac4c66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.212970 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.213125 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn6r5\" (UniqueName: \"kubernetes.io/projected/ce91a66e-ca27-4afb-a47b-66726fac4c66-kube-api-access-vn6r5\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.213149 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce91a66e-ca27-4afb-a47b-66726fac4c66-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.805895 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tz5g7" event={"ID":"ce91a66e-ca27-4afb-a47b-66726fac4c66","Type":"ContainerDied","Data":"a2eb32ea9a381e41a91273753a2a5b8616ae90930b33f067c3f2a38e2c71981f"} Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.805979 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tz5g7" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.806293 5028 scope.go:117] "RemoveContainer" containerID="337b643539db249152a1a93162e623a535c3dcd1bf7ed5b8a06edeeca81e3705" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.828929 5028 scope.go:117] "RemoveContainer" containerID="1d8e8a2d5722e375d3d2d9a07e195f10126568f5079b1754f1dc17fe911687ff" Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.844987 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tz5g7"] Nov 23 09:33:20 crc kubenswrapper[5028]: I1123 09:33:20.859624 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tz5g7"] Nov 23 09:33:21 crc kubenswrapper[5028]: I1123 09:33:21.064639 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" path="/var/lib/kubelet/pods/ce91a66e-ca27-4afb-a47b-66726fac4c66/volumes" Nov 23 09:33:21 crc kubenswrapper[5028]: I1123 09:33:21.179810 5028 scope.go:117] "RemoveContainer" containerID="84beffcd3a15939657c8d14ad22e31575f391f6fe2a5096483ba0ecd7c3cacc1" Nov 23 09:33:29 crc kubenswrapper[5028]: I1123 09:33:29.053355 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:33:29 crc kubenswrapper[5028]: E1123 09:33:29.055571 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:33:41 crc kubenswrapper[5028]: I1123 09:33:41.054237 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:33:41 crc kubenswrapper[5028]: E1123 09:33:41.055102 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
Nov 23 09:33:55 crc kubenswrapper[5028]: I1123 09:33:55.053299 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:33:55 crc kubenswrapper[5028]: E1123 09:33:55.054054 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:34:08 crc kubenswrapper[5028]: I1123 09:34:08.053542 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:34:08 crc kubenswrapper[5028]: E1123 09:34:08.054286 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:34:22 crc kubenswrapper[5028]: I1123 09:34:22.054037 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:34:22 crc kubenswrapper[5028]: E1123 09:34:22.055388 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:34:37 crc kubenswrapper[5028]: I1123 09:34:37.067186 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:34:37 crc kubenswrapper[5028]: E1123 09:34:37.068575 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.181938 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g78vn"] Nov 23 09:34:42 crc kubenswrapper[5028]: E1123 09:34:42.183802 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerName="extract-content" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.183825 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66"
containerName="extract-content" Nov 23 09:34:42 crc kubenswrapper[5028]: E1123 09:34:42.183839 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerName="registry-server" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.183847 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerName="registry-server" Nov 23 09:34:42 crc kubenswrapper[5028]: E1123 09:34:42.183902 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerName="extract-utilities" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.183916 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerName="extract-utilities" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.184272 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce91a66e-ca27-4afb-a47b-66726fac4c66" containerName="registry-server" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.186523 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.197781 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g78vn"] Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.232941 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-catalog-content\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.233190 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-utilities\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.233341 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxtkd\" (UniqueName: \"kubernetes.io/projected/cf2c4253-70a6-4a20-b003-7885c031a1ee-kube-api-access-sxtkd\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.336336 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxtkd\" (UniqueName: \"kubernetes.io/projected/cf2c4253-70a6-4a20-b003-7885c031a1ee-kube-api-access-sxtkd\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.336443 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-catalog-content\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.336691 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-utilities\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.337018 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-catalog-content\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.337434 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-utilities\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.366634 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxtkd\" (UniqueName: \"kubernetes.io/projected/cf2c4253-70a6-4a20-b003-7885c031a1ee-kube-api-access-sxtkd\") pod \"redhat-marketplace-g78vn\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:42 crc kubenswrapper[5028]: I1123 09:34:42.525724 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:43 crc kubenswrapper[5028]: I1123 09:34:43.051558 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g78vn"] Nov 23 09:34:43 crc kubenswrapper[5028]: I1123 09:34:43.848976 5028 generic.go:334] "Generic (PLEG): container finished" podID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerID="ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e" exitCode=0 Nov 23 09:34:43 crc kubenswrapper[5028]: I1123 09:34:43.849164 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g78vn" event={"ID":"cf2c4253-70a6-4a20-b003-7885c031a1ee","Type":"ContainerDied","Data":"ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e"} Nov 23 09:34:43 crc kubenswrapper[5028]: I1123 09:34:43.849309 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g78vn" event={"ID":"cf2c4253-70a6-4a20-b003-7885c031a1ee","Type":"ContainerStarted","Data":"e9216c462b992813dc2679d07eaa77a762e69e06c7f989e1074befe4954f3226"} Nov 23 09:34:44 crc kubenswrapper[5028]: I1123 09:34:44.860792 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g78vn" event={"ID":"cf2c4253-70a6-4a20-b003-7885c031a1ee","Type":"ContainerStarted","Data":"375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308"} Nov 23 09:34:45 crc kubenswrapper[5028]: I1123 09:34:45.873038 5028 generic.go:334] "Generic (PLEG): container finished" podID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerID="375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308" exitCode=0 Nov 23 09:34:45 crc kubenswrapper[5028]: I1123 09:34:45.873143 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g78vn" 
event={"ID":"cf2c4253-70a6-4a20-b003-7885c031a1ee","Type":"ContainerDied","Data":"375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308"} Nov 23 09:34:47 crc kubenswrapper[5028]: I1123 09:34:47.916045 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g78vn" event={"ID":"cf2c4253-70a6-4a20-b003-7885c031a1ee","Type":"ContainerStarted","Data":"21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb"} Nov 23 09:34:47 crc kubenswrapper[5028]: I1123 09:34:47.952588 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g78vn" podStartSLOduration=3.110087421 podStartE2EDuration="5.952565643s" podCreationTimestamp="2025-11-23 09:34:42 +0000 UTC" firstStartedPulling="2025-11-23 09:34:43.851252543 +0000 UTC m=+9867.548657332" lastFinishedPulling="2025-11-23 09:34:46.693730775 +0000 UTC m=+9870.391135554" observedRunningTime="2025-11-23 09:34:47.937315681 +0000 UTC m=+9871.634720480" watchObservedRunningTime="2025-11-23 09:34:47.952565643 +0000 UTC m=+9871.649970412" Nov 23 09:34:49 crc kubenswrapper[5028]: I1123 09:34:49.054204 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:34:49 crc kubenswrapper[5028]: E1123 09:34:49.054577 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:34:52 crc kubenswrapper[5028]: I1123 09:34:52.526151 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:52 crc kubenswrapper[5028]: I1123 09:34:52.526603 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:52 crc kubenswrapper[5028]: I1123 09:34:52.589039 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:53 crc kubenswrapper[5028]: I1123 09:34:53.035364 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:53 crc kubenswrapper[5028]: I1123 09:34:53.093110 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g78vn"] Nov 23 09:34:54 crc kubenswrapper[5028]: I1123 09:34:54.993386 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g78vn" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="registry-server" containerID="cri-o://21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb" gracePeriod=2 Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.486455 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.543934 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-utilities\") pod \"cf2c4253-70a6-4a20-b003-7885c031a1ee\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.545481 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-utilities" (OuterVolumeSpecName: "utilities") pod "cf2c4253-70a6-4a20-b003-7885c031a1ee" (UID: "cf2c4253-70a6-4a20-b003-7885c031a1ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.544402 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-catalog-content\") pod \"cf2c4253-70a6-4a20-b003-7885c031a1ee\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.551458 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxtkd\" (UniqueName: \"kubernetes.io/projected/cf2c4253-70a6-4a20-b003-7885c031a1ee-kube-api-access-sxtkd\") pod \"cf2c4253-70a6-4a20-b003-7885c031a1ee\" (UID: \"cf2c4253-70a6-4a20-b003-7885c031a1ee\") " Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.552593 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.556907 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2c4253-70a6-4a20-b003-7885c031a1ee-kube-api-access-sxtkd" (OuterVolumeSpecName: "kube-api-access-sxtkd") pod "cf2c4253-70a6-4a20-b003-7885c031a1ee" (UID: "cf2c4253-70a6-4a20-b003-7885c031a1ee"). InnerVolumeSpecName "kube-api-access-sxtkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.568779 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf2c4253-70a6-4a20-b003-7885c031a1ee" (UID: "cf2c4253-70a6-4a20-b003-7885c031a1ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
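kube-api-access-sxtkd, torn down above, is the auto-injected projected service-account volume; removing it is what invalidates the pod's API credentials. Such a volume conventionally bundles a bound token, the cluster CA bundle, and the namespace file. A sketch of the equivalent projected-volume spec using the k8s.io/api/core/v1 types (the field values here, including the 3607s token lifetime, are the conventional ones, not read from this cluster):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // conventional bound-token lifetime; assumed, not logged
	vol := corev1.Volume{
		Name: "kube-api-access-sxtkd",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}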
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.654859 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxtkd\" (UniqueName: \"kubernetes.io/projected/cf2c4253-70a6-4a20-b003-7885c031a1ee-kube-api-access-sxtkd\") on node \"crc\" DevicePath \"\"" Nov 23 09:34:55 crc kubenswrapper[5028]: I1123 09:34:55.655143 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf2c4253-70a6-4a20-b003-7885c031a1ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.005854 5028 generic.go:334] "Generic (PLEG): container finished" podID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerID="21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb" exitCode=0 Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.005906 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g78vn" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.005926 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g78vn" event={"ID":"cf2c4253-70a6-4a20-b003-7885c031a1ee","Type":"ContainerDied","Data":"21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb"} Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.006962 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g78vn" event={"ID":"cf2c4253-70a6-4a20-b003-7885c031a1ee","Type":"ContainerDied","Data":"e9216c462b992813dc2679d07eaa77a762e69e06c7f989e1074befe4954f3226"} Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.006989 5028 scope.go:117] "RemoveContainer" containerID="21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.050375 5028 scope.go:117] "RemoveContainer" containerID="375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.055793 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g78vn"] Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.070821 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g78vn"] Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.081576 5028 scope.go:117] "RemoveContainer" containerID="ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.122234 5028 scope.go:117] "RemoveContainer" containerID="21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb" Nov 23 09:34:56 crc kubenswrapper[5028]: E1123 09:34:56.122723 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb\": container with ID starting with 21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb not found: ID does not exist" containerID="21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.122755 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb"} err="failed to get container status 
\"21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb\": rpc error: code = NotFound desc = could not find container \"21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb\": container with ID starting with 21a8f761ed3df6f508d431d1a0360c49a13a3748e28cd026254e3280036ce3cb not found: ID does not exist" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.122777 5028 scope.go:117] "RemoveContainer" containerID="375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308" Nov 23 09:34:56 crc kubenswrapper[5028]: E1123 09:34:56.123536 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308\": container with ID starting with 375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308 not found: ID does not exist" containerID="375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.123600 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308"} err="failed to get container status \"375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308\": rpc error: code = NotFound desc = could not find container \"375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308\": container with ID starting with 375bfc15e52987df94464b0eebc9e7238ea5cc9bded8a3ef6b7e802206d33308 not found: ID does not exist" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.123616 5028 scope.go:117] "RemoveContainer" containerID="ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e" Nov 23 09:34:56 crc kubenswrapper[5028]: E1123 09:34:56.123870 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e\": container with ID starting with ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e not found: ID does not exist" containerID="ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e" Nov 23 09:34:56 crc kubenswrapper[5028]: I1123 09:34:56.123896 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e"} err="failed to get container status \"ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e\": rpc error: code = NotFound desc = could not find container \"ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e\": container with ID starting with ee9ccf7f2b78f1948ec6995b00664648197b71b34ef3e7aee1a1db2e4d864e3e not found: ID does not exist" Nov 23 09:34:57 crc kubenswrapper[5028]: I1123 09:34:57.067785 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" path="/var/lib/kubelet/pods/cf2c4253-70a6-4a20-b003-7885c031a1ee/volumes" Nov 23 09:35:02 crc kubenswrapper[5028]: I1123 09:35:02.054311 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:35:02 crc kubenswrapper[5028]: E1123 09:35:02.055467 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:35:17 crc kubenswrapper[5028]: I1123 09:35:17.063419 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:35:17 crc kubenswrapper[5028]: E1123 09:35:17.064446 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:35:31 crc kubenswrapper[5028]: I1123 09:35:31.053932 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:35:31 crc kubenswrapper[5028]: I1123 09:35:31.445066 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"c0f1635254f1021f75a78a4d090c542766ebc56f9caf285d4a9adad855a050d2"} Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.061556 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p4wj4"] Nov 23 09:35:48 crc kubenswrapper[5028]: E1123 09:35:48.063582 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="extract-content" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.063602 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="extract-content" Nov 23 09:35:48 crc kubenswrapper[5028]: E1123 09:35:48.063634 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="registry-server" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.063642 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="registry-server" Nov 23 09:35:48 crc kubenswrapper[5028]: E1123 09:35:48.063659 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="extract-utilities" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.063667 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="extract-utilities" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.063914 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2c4253-70a6-4a20-b003-7885c031a1ee" containerName="registry-server" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.065944 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p4wj4"
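Two things happen above: the 5m0s backoff for machine-config-daemon finally expires at 09:35:31 and the container restarts (ContainerStarted c0f16352...), and the admission of community-operators-p4wj4 triggers the resource managers' checkpoint reconciliation. The burst of cpu_manager/state_mem/memory_manager "RemoveStaleState" entries is the latter: any CPU-set or memory assignment still recorded for pods that no longer exist (here the cf2c4253... UID) is dropped before the new pod is admitted. In essence it is a map sweep keyed by pod UID and container name; a minimal sketch (types and names are illustrative):

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops assignments whose pod is no longer active,
// mirroring the cpu_manager.go:410 / state_mem.go:107 entries above.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q pod %q\n", k.container, k.podUID)
			delete(assignments, k) // "Deleted CPUSet assignment"
		}
	}
}

func main() {
	assignments := map[key]string{
		{"cf2c4253-70a6-4a20-b003-7885c031a1ee", "registry-server"}: "0-3",
	}
	active := map[string]bool{"6e43452c-6a32-4965-a0fa-4af15b59d71f": true}
	removeStaleState(assignments, active)
	fmt.Println(len(assignments)) // 0
}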
Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.090853 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p4wj4"] Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.218351 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-catalog-content\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.218467 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-utilities\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.218557 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx69j\" (UniqueName: \"kubernetes.io/projected/6e43452c-6a32-4965-a0fa-4af15b59d71f-kube-api-access-tx69j\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.321045 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-catalog-content\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.321130 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-utilities\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.321180 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx69j\" (UniqueName: \"kubernetes.io/projected/6e43452c-6a32-4965-a0fa-4af15b59d71f-kube-api-access-tx69j\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.321831 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-utilities\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.322054 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-catalog-content\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.344289 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx69j\" (UniqueName: \"kubernetes.io/projected/6e43452c-6a32-4965-a0fa-4af15b59d71f-kube-api-access-tx69j\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4"
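The MountVolume.SetUp successes above are cheap for emptyDir: the plugin only has to create a directory under the pod's volumes tree, the same /var/lib/kubelet/pods/<uid>/volumes/... layout visible throughout this log. A stripped-down version of that SetUp step (path layout as observed here; error handling reduced, and a scratch root used so the sketch runs without kubelet privileges):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// emptyDirSetUp creates the backing directory for an emptyDir volume using
// the on-disk layout seen in this log:
//   <root>/pods/<podUID>/volumes/kubernetes.io~empty-dir/<name>
func emptyDirSetUp(root, podUID, name string) (string, error) {
	dir := filepath.Join(root, "pods", podUID, "volumes", "kubernetes.io~empty-dir", name)
	return dir, os.MkdirAll(dir, 0o755)
}

func main() {
	root, _ := os.MkdirTemp("", "kubelet")
	dir, err := emptyDirSetUp(root, "6e43452c-6a32-4965-a0fa-4af15b59d71f", "catalog-content")
	fmt.Println(dir, err) // MountVolume.SetUp succeeded
}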
"MountVolume.SetUp succeeded for volume \"kube-api-access-tx69j\" (UniqueName: \"kubernetes.io/projected/6e43452c-6a32-4965-a0fa-4af15b59d71f-kube-api-access-tx69j\") pod \"community-operators-p4wj4\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:48 crc kubenswrapper[5028]: I1123 09:35:48.407353 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:49 crc kubenswrapper[5028]: I1123 09:35:49.065172 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p4wj4"] Nov 23 09:35:49 crc kubenswrapper[5028]: I1123 09:35:49.705184 5028 generic.go:334] "Generic (PLEG): container finished" podID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerID="4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375" exitCode=0 Nov 23 09:35:49 crc kubenswrapper[5028]: I1123 09:35:49.705288 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4wj4" event={"ID":"6e43452c-6a32-4965-a0fa-4af15b59d71f","Type":"ContainerDied","Data":"4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375"} Nov 23 09:35:49 crc kubenswrapper[5028]: I1123 09:35:49.705585 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4wj4" event={"ID":"6e43452c-6a32-4965-a0fa-4af15b59d71f","Type":"ContainerStarted","Data":"2daa248f7385a725f6b844f1efa4e424d025f663528ab3a7e02fd838c859dbb5"} Nov 23 09:35:49 crc kubenswrapper[5028]: I1123 09:35:49.712697 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:35:50 crc kubenswrapper[5028]: I1123 09:35:50.719398 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4wj4" event={"ID":"6e43452c-6a32-4965-a0fa-4af15b59d71f","Type":"ContainerStarted","Data":"683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5"} Nov 23 09:35:52 crc kubenswrapper[5028]: I1123 09:35:52.745348 5028 generic.go:334] "Generic (PLEG): container finished" podID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerID="683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5" exitCode=0 Nov 23 09:35:52 crc kubenswrapper[5028]: I1123 09:35:52.745436 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4wj4" event={"ID":"6e43452c-6a32-4965-a0fa-4af15b59d71f","Type":"ContainerDied","Data":"683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5"} Nov 23 09:35:53 crc kubenswrapper[5028]: I1123 09:35:53.762274 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4wj4" event={"ID":"6e43452c-6a32-4965-a0fa-4af15b59d71f","Type":"ContainerStarted","Data":"89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d"} Nov 23 09:35:53 crc kubenswrapper[5028]: I1123 09:35:53.794604 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p4wj4" podStartSLOduration=2.350944234 podStartE2EDuration="5.794580307s" podCreationTimestamp="2025-11-23 09:35:48 +0000 UTC" firstStartedPulling="2025-11-23 09:35:49.712272154 +0000 UTC m=+9933.409676943" lastFinishedPulling="2025-11-23 09:35:53.155908237 +0000 UTC m=+9936.853313016" observedRunningTime="2025-11-23 09:35:53.786270509 +0000 UTC m=+9937.483675308" watchObservedRunningTime="2025-11-23 
Nov 23 09:35:58 crc kubenswrapper[5028]: I1123 09:35:58.407509 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:58 crc kubenswrapper[5028]: I1123 09:35:58.408519 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:58 crc kubenswrapper[5028]: I1123 09:35:58.472037 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:58 crc kubenswrapper[5028]: I1123 09:35:58.884515 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:35:58 crc kubenswrapper[5028]: I1123 09:35:58.949149 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p4wj4"] Nov 23 09:36:00 crc kubenswrapper[5028]: I1123 09:36:00.861516 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p4wj4" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="registry-server" containerID="cri-o://89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d" gracePeriod=2 Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.409154 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.464495 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx69j\" (UniqueName: \"kubernetes.io/projected/6e43452c-6a32-4965-a0fa-4af15b59d71f-kube-api-access-tx69j\") pod \"6e43452c-6a32-4965-a0fa-4af15b59d71f\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.464590 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-utilities\") pod \"6e43452c-6a32-4965-a0fa-4af15b59d71f\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.464682 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-catalog-content\") pod \"6e43452c-6a32-4965-a0fa-4af15b59d71f\" (UID: \"6e43452c-6a32-4965-a0fa-4af15b59d71f\") " Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.469869 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-utilities" (OuterVolumeSpecName: "utilities") pod "6e43452c-6a32-4965-a0fa-4af15b59d71f" (UID: "6e43452c-6a32-4965-a0fa-4af15b59d71f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.487382 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e43452c-6a32-4965-a0fa-4af15b59d71f-kube-api-access-tx69j" (OuterVolumeSpecName: "kube-api-access-tx69j") pod "6e43452c-6a32-4965-a0fa-4af15b59d71f" (UID: "6e43452c-6a32-4965-a0fa-4af15b59d71f"). InnerVolumeSpecName "kube-api-access-tx69j".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.523664 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e43452c-6a32-4965-a0fa-4af15b59d71f" (UID: "6e43452c-6a32-4965-a0fa-4af15b59d71f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.567284 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx69j\" (UniqueName: \"kubernetes.io/projected/6e43452c-6a32-4965-a0fa-4af15b59d71f-kube-api-access-tx69j\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.567320 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.567331 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e43452c-6a32-4965-a0fa-4af15b59d71f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.886520 5028 generic.go:334] "Generic (PLEG): container finished" podID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerID="89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d" exitCode=0 Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.886592 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4wj4" event={"ID":"6e43452c-6a32-4965-a0fa-4af15b59d71f","Type":"ContainerDied","Data":"89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d"} Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.886665 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p4wj4" event={"ID":"6e43452c-6a32-4965-a0fa-4af15b59d71f","Type":"ContainerDied","Data":"2daa248f7385a725f6b844f1efa4e424d025f663528ab3a7e02fd838c859dbb5"} Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.886619 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p4wj4" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.886729 5028 scope.go:117] "RemoveContainer" containerID="89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.932021 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p4wj4"] Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.933761 5028 scope.go:117] "RemoveContainer" containerID="683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5" Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.942775 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p4wj4"] Nov 23 09:36:01 crc kubenswrapper[5028]: I1123 09:36:01.962635 5028 scope.go:117] "RemoveContainer" containerID="4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375" Nov 23 09:36:02 crc kubenswrapper[5028]: I1123 09:36:02.023649 5028 scope.go:117] "RemoveContainer" containerID="89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d" Nov 23 09:36:02 crc kubenswrapper[5028]: E1123 09:36:02.024481 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d\": container with ID starting with 89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d not found: ID does not exist" containerID="89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d" Nov 23 09:36:02 crc kubenswrapper[5028]: I1123 09:36:02.024533 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d"} err="failed to get container status \"89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d\": rpc error: code = NotFound desc = could not find container \"89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d\": container with ID starting with 89b71b0d68e3441c4d3986085e4071a2eb8f23ae77c41680e7b442a2191ea47d not found: ID does not exist" Nov 23 09:36:02 crc kubenswrapper[5028]: I1123 09:36:02.024571 5028 scope.go:117] "RemoveContainer" containerID="683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5" Nov 23 09:36:02 crc kubenswrapper[5028]: E1123 09:36:02.025086 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5\": container with ID starting with 683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5 not found: ID does not exist" containerID="683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5" Nov 23 09:36:02 crc kubenswrapper[5028]: I1123 09:36:02.025132 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5"} err="failed to get container status \"683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5\": rpc error: code = NotFound desc = could not find container \"683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5\": container with ID starting with 683346844bdba2ce2ed81fb485072acff79e9bd9fda0c6e3863cd5d37807f3a5 not found: ID does not exist" Nov 23 09:36:02 crc kubenswrapper[5028]: I1123 09:36:02.025163 5028 scope.go:117] "RemoveContainer" 
containerID="4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375" Nov 23 09:36:02 crc kubenswrapper[5028]: E1123 09:36:02.025554 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375\": container with ID starting with 4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375 not found: ID does not exist" containerID="4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375" Nov 23 09:36:02 crc kubenswrapper[5028]: I1123 09:36:02.025592 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375"} err="failed to get container status \"4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375\": rpc error: code = NotFound desc = could not find container \"4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375\": container with ID starting with 4ed8c0a9de05b70571d50556d45192247917c7d7e9836cb725754813887d7375 not found: ID does not exist" Nov 23 09:36:03 crc kubenswrapper[5028]: I1123 09:36:03.078175 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" path="/var/lib/kubelet/pods/6e43452c-6a32-4965-a0fa-4af15b59d71f/volumes" Nov 23 09:36:56 crc kubenswrapper[5028]: I1123 09:36:56.589868 5028 generic.go:334] "Generic (PLEG): container finished" podID="2a7e62e6-7eca-4f20-821f-fc8c61b58dda" containerID="56234831d0b6ed8f1409616ed7138968c487f5c29c9e96874a6f9492a2233bf4" exitCode=0 Nov 23 09:36:56 crc kubenswrapper[5028]: I1123 09:36:56.589983 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" event={"ID":"2a7e62e6-7eca-4f20-821f-fc8c61b58dda","Type":"ContainerDied","Data":"56234831d0b6ed8f1409616ed7138968c487f5c29c9e96874a6f9492a2233bf4"} Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.285028 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.345256 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceph\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.345728 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-1\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.345793 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7lx9\" (UniqueName: \"kubernetes.io/projected/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-kube-api-access-c7lx9\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.345913 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ssh-key\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.345974 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-0\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.346002 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-telemetry-combined-ca-bundle\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.346080 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-2\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.346128 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-inventory\") pod \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\" (UID: \"2a7e62e6-7eca-4f20-821f-fc8c61b58dda\") " Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.353194 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "telemetry-combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.353404 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-kube-api-access-c7lx9" (OuterVolumeSpecName: "kube-api-access-c7lx9") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "kube-api-access-c7lx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.353721 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceph" (OuterVolumeSpecName: "ceph") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.379563 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.380345 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.380660 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-inventory" (OuterVolumeSpecName: "inventory") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.382960 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.395233 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2a7e62e6-7eca-4f20-821f-fc8c61b58dda" (UID: "2a7e62e6-7eca-4f20-821f-fc8c61b58dda"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.449906 5028 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.449992 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7lx9\" (UniqueName: \"kubernetes.io/projected/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-kube-api-access-c7lx9\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.450010 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.450027 5028 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.450044 5028 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.450059 5028 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.450074 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.450087 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2a7e62e6-7eca-4f20-821f-fc8c61b58dda-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.624456 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" event={"ID":"2a7e62e6-7eca-4f20-821f-fc8c61b58dda","Type":"ContainerDied","Data":"934c3a9fc3f785aa33ff2f26864726c24176b9549e67ef0474eaf8871c4c8303"} Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.624523 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hxr5g" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.624646 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="934c3a9fc3f785aa33ff2f26864726c24176b9549e67ef0474eaf8871c4c8303" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.850465 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-6rrkc"] Nov 23 09:36:58 crc kubenswrapper[5028]: E1123 09:36:58.852188 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a7e62e6-7eca-4f20-821f-fc8c61b58dda" containerName="telemetry-openstack-openstack-cell1" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.852235 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7e62e6-7eca-4f20-821f-fc8c61b58dda" containerName="telemetry-openstack-openstack-cell1" Nov 23 09:36:58 crc kubenswrapper[5028]: E1123 09:36:58.852301 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="registry-server" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.852324 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="registry-server" Nov 23 09:36:58 crc kubenswrapper[5028]: E1123 09:36:58.852381 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="extract-utilities" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.852401 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="extract-utilities" Nov 23 09:36:58 crc kubenswrapper[5028]: E1123 09:36:58.852438 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="extract-content" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.852456 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="extract-content" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.852895 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e43452c-6a32-4965-a0fa-4af15b59d71f" containerName="registry-server" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.853022 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7e62e6-7eca-4f20-821f-fc8c61b58dda" containerName="telemetry-openstack-openstack-cell1" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.855163 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.859032 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.861122 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.861343 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.861651 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.861820 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.879620 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-6rrkc"] Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.963366 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gfg\" (UniqueName: \"kubernetes.io/projected/2ec4b9a2-a3fa-4495-9205-e41160705fda-kube-api-access-v6gfg\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.963433 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.963494 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.963533 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.963584 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:58 crc kubenswrapper[5028]: I1123 09:36:58.963757 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.065964 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.066062 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.066215 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gfg\" (UniqueName: \"kubernetes.io/projected/2ec4b9a2-a3fa-4495-9205-e41160705fda-kube-api-access-v6gfg\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.066265 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.066360 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.066644 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.074554 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.074823 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-combined-ca-bundle\") pod 
\"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.074996 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ceph\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.076408 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.076488 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.087133 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gfg\" (UniqueName: \"kubernetes.io/projected/2ec4b9a2-a3fa-4495-9205-e41160705fda-kube-api-access-v6gfg\") pod \"neutron-sriov-openstack-openstack-cell1-6rrkc\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.181898 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:36:59 crc kubenswrapper[5028]: I1123 09:36:59.867393 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-6rrkc"] Nov 23 09:37:00 crc kubenswrapper[5028]: I1123 09:37:00.658764 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" event={"ID":"2ec4b9a2-a3fa-4495-9205-e41160705fda","Type":"ContainerStarted","Data":"e788ef37fa38ec49c91140109cea7e5e36bbfd3bc80803dba6254f88b0fe6aed"} Nov 23 09:37:01 crc kubenswrapper[5028]: I1123 09:37:01.676515 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" event={"ID":"2ec4b9a2-a3fa-4495-9205-e41160705fda","Type":"ContainerStarted","Data":"57a66b090556aacd9e47c70812d83b9a4b03bfdbedda6d60e2987160a6dbf599"} Nov 23 09:37:01 crc kubenswrapper[5028]: I1123 09:37:01.711501 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" podStartSLOduration=3.223704555 podStartE2EDuration="3.711478621s" podCreationTimestamp="2025-11-23 09:36:58 +0000 UTC" firstStartedPulling="2025-11-23 09:36:59.873034338 +0000 UTC m=+10003.570439117" lastFinishedPulling="2025-11-23 09:37:00.360808414 +0000 UTC m=+10004.058213183" observedRunningTime="2025-11-23 09:37:01.702532418 +0000 UTC m=+10005.399937197" watchObservedRunningTime="2025-11-23 09:37:01.711478621 +0000 UTC m=+10005.408883400" Nov 23 09:37:46 crc kubenswrapper[5028]: I1123 09:37:46.193902 5028 generic.go:334] "Generic (PLEG): container finished" podID="2ec4b9a2-a3fa-4495-9205-e41160705fda" containerID="57a66b090556aacd9e47c70812d83b9a4b03bfdbedda6d60e2987160a6dbf599" exitCode=0 Nov 23 09:37:46 crc kubenswrapper[5028]: I1123 09:37:46.194002 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" event={"ID":"2ec4b9a2-a3fa-4495-9205-e41160705fda","Type":"ContainerDied","Data":"57a66b090556aacd9e47c70812d83b9a4b03bfdbedda6d60e2987160a6dbf599"} Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.707878 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.717528 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-agent-neutron-config-0\") pod \"2ec4b9a2-a3fa-4495-9205-e41160705fda\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.717617 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ceph\") pod \"2ec4b9a2-a3fa-4495-9205-e41160705fda\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.717942 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-combined-ca-bundle\") pod \"2ec4b9a2-a3fa-4495-9205-e41160705fda\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.718122 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ssh-key\") pod \"2ec4b9a2-a3fa-4495-9205-e41160705fda\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.718244 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-inventory\") pod \"2ec4b9a2-a3fa-4495-9205-e41160705fda\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.718274 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gfg\" (UniqueName: \"kubernetes.io/projected/2ec4b9a2-a3fa-4495-9205-e41160705fda-kube-api-access-v6gfg\") pod \"2ec4b9a2-a3fa-4495-9205-e41160705fda\" (UID: \"2ec4b9a2-a3fa-4495-9205-e41160705fda\") " Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.731184 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "2ec4b9a2-a3fa-4495-9205-e41160705fda" (UID: "2ec4b9a2-a3fa-4495-9205-e41160705fda"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.732850 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec4b9a2-a3fa-4495-9205-e41160705fda-kube-api-access-v6gfg" (OuterVolumeSpecName: "kube-api-access-v6gfg") pod "2ec4b9a2-a3fa-4495-9205-e41160705fda" (UID: "2ec4b9a2-a3fa-4495-9205-e41160705fda"). InnerVolumeSpecName "kube-api-access-v6gfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.735108 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ceph" (OuterVolumeSpecName: "ceph") pod "2ec4b9a2-a3fa-4495-9205-e41160705fda" (UID: "2ec4b9a2-a3fa-4495-9205-e41160705fda"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.758810 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-inventory" (OuterVolumeSpecName: "inventory") pod "2ec4b9a2-a3fa-4495-9205-e41160705fda" (UID: "2ec4b9a2-a3fa-4495-9205-e41160705fda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.766834 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "2ec4b9a2-a3fa-4495-9205-e41160705fda" (UID: "2ec4b9a2-a3fa-4495-9205-e41160705fda"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.774691 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2ec4b9a2-a3fa-4495-9205-e41160705fda" (UID: "2ec4b9a2-a3fa-4495-9205-e41160705fda"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.823911 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.824567 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.824584 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.824598 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6gfg\" (UniqueName: \"kubernetes.io/projected/2ec4b9a2-a3fa-4495-9205-e41160705fda-kube-api-access-v6gfg\") on node \"crc\" DevicePath \"\"" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.824612 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:37:47 crc kubenswrapper[5028]: I1123 09:37:47.824621 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2ec4b9a2-a3fa-4495-9205-e41160705fda-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.224265 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" event={"ID":"2ec4b9a2-a3fa-4495-9205-e41160705fda","Type":"ContainerDied","Data":"e788ef37fa38ec49c91140109cea7e5e36bbfd3bc80803dba6254f88b0fe6aed"} Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.224316 5028 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e788ef37fa38ec49c91140109cea7e5e36bbfd3bc80803dba6254f88b0fe6aed" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.224456 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-6rrkc" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.335790 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z"] Nov 23 09:37:48 crc kubenswrapper[5028]: E1123 09:37:48.336530 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec4b9a2-a3fa-4495-9205-e41160705fda" containerName="neutron-sriov-openstack-openstack-cell1" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.336565 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec4b9a2-a3fa-4495-9205-e41160705fda" containerName="neutron-sriov-openstack-openstack-cell1" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.337041 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec4b9a2-a3fa-4495-9205-e41160705fda" containerName="neutron-sriov-openstack-openstack-cell1" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.338262 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.340752 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.344230 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.344238 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.344422 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.344673 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.346506 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z"] Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.440448 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.440508 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.440976 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ceph\") pod 
\"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.441040 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.441149 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxmv\" (UniqueName: \"kubernetes.io/projected/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-kube-api-access-llxmv\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.441193 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.543844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.543913 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.544063 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.544098 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.544145 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxmv\" (UniqueName: \"kubernetes.io/projected/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-kube-api-access-llxmv\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " 
pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.544167 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.550360 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.550388 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.553190 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ceph\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.555647 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.568132 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.572350 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxmv\" (UniqueName: \"kubernetes.io/projected/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-kube-api-access-llxmv\") pod \"neutron-dhcp-openstack-openstack-cell1-x6q9z\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:48 crc kubenswrapper[5028]: I1123 09:37:48.662095 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:37:49 crc kubenswrapper[5028]: I1123 09:37:49.318976 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z"] Nov 23 09:37:50 crc kubenswrapper[5028]: I1123 09:37:50.271510 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" event={"ID":"74331472-e7a6-4f7a-a7e9-32c195f1e4cf","Type":"ContainerStarted","Data":"5be4dcb12024b67305b8f4ab04da808edb8cb7795231b2e4eb8052f81c9df398"} Nov 23 09:37:50 crc kubenswrapper[5028]: I1123 09:37:50.271930 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" event={"ID":"74331472-e7a6-4f7a-a7e9-32c195f1e4cf","Type":"ContainerStarted","Data":"44f4bd740c15a2ac821f9d11643843695a1a2839acee79dac8f91f58de33d895"} Nov 23 09:37:50 crc kubenswrapper[5028]: I1123 09:37:50.331932 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" podStartSLOduration=1.769577008 podStartE2EDuration="2.33189655s" podCreationTimestamp="2025-11-23 09:37:48 +0000 UTC" firstStartedPulling="2025-11-23 09:37:49.326908054 +0000 UTC m=+10053.024312833" lastFinishedPulling="2025-11-23 09:37:49.889227596 +0000 UTC m=+10053.586632375" observedRunningTime="2025-11-23 09:37:50.315713748 +0000 UTC m=+10054.013118557" watchObservedRunningTime="2025-11-23 09:37:50.33189655 +0000 UTC m=+10054.029301349" Nov 23 09:38:00 crc kubenswrapper[5028]: I1123 09:38:00.946633 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:38:00 crc kubenswrapper[5028]: I1123 09:38:00.947799 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:38:30 crc kubenswrapper[5028]: I1123 09:38:30.946180 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:38:30 crc kubenswrapper[5028]: I1123 09:38:30.946770 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:38:52 crc kubenswrapper[5028]: I1123 09:38:52.050737 5028 generic.go:334] "Generic (PLEG): container finished" podID="74331472-e7a6-4f7a-a7e9-32c195f1e4cf" containerID="5be4dcb12024b67305b8f4ab04da808edb8cb7795231b2e4eb8052f81c9df398" exitCode=0 Nov 23 09:38:52 crc kubenswrapper[5028]: I1123 09:38:52.050828 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" 
event={"ID":"74331472-e7a6-4f7a-a7e9-32c195f1e4cf","Type":"ContainerDied","Data":"5be4dcb12024b67305b8f4ab04da808edb8cb7795231b2e4eb8052f81c9df398"} Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.588554 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.756864 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llxmv\" (UniqueName: \"kubernetes.io/projected/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-kube-api-access-llxmv\") pod \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.757015 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-combined-ca-bundle\") pod \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.757142 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ceph\") pod \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.757198 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-inventory\") pod \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.757220 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ssh-key\") pod \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.757251 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-agent-neutron-config-0\") pod \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\" (UID: \"74331472-e7a6-4f7a-a7e9-32c195f1e4cf\") " Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.762993 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "74331472-e7a6-4f7a-a7e9-32c195f1e4cf" (UID: "74331472-e7a6-4f7a-a7e9-32c195f1e4cf"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.763044 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ceph" (OuterVolumeSpecName: "ceph") pod "74331472-e7a6-4f7a-a7e9-32c195f1e4cf" (UID: "74331472-e7a6-4f7a-a7e9-32c195f1e4cf"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.778166 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-kube-api-access-llxmv" (OuterVolumeSpecName: "kube-api-access-llxmv") pod "74331472-e7a6-4f7a-a7e9-32c195f1e4cf" (UID: "74331472-e7a6-4f7a-a7e9-32c195f1e4cf"). InnerVolumeSpecName "kube-api-access-llxmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.786833 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-inventory" (OuterVolumeSpecName: "inventory") pod "74331472-e7a6-4f7a-a7e9-32c195f1e4cf" (UID: "74331472-e7a6-4f7a-a7e9-32c195f1e4cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.788614 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "74331472-e7a6-4f7a-a7e9-32c195f1e4cf" (UID: "74331472-e7a6-4f7a-a7e9-32c195f1e4cf"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.789169 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "74331472-e7a6-4f7a-a7e9-32c195f1e4cf" (UID: "74331472-e7a6-4f7a-a7e9-32c195f1e4cf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.860504 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.860549 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llxmv\" (UniqueName: \"kubernetes.io/projected/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-kube-api-access-llxmv\") on node \"crc\" DevicePath \"\"" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.860563 5028 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.860577 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.860590 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:38:53 crc kubenswrapper[5028]: I1123 09:38:53.860605 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74331472-e7a6-4f7a-a7e9-32c195f1e4cf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:38:54 crc kubenswrapper[5028]: I1123 09:38:54.077105 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" event={"ID":"74331472-e7a6-4f7a-a7e9-32c195f1e4cf","Type":"ContainerDied","Data":"44f4bd740c15a2ac821f9d11643843695a1a2839acee79dac8f91f58de33d895"} Nov 23 09:38:54 crc kubenswrapper[5028]: I1123 09:38:54.077490 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f4bd740c15a2ac821f9d11643843695a1a2839acee79dac8f91f58de33d895" Nov 23 09:38:54 crc kubenswrapper[5028]: I1123 09:38:54.077285 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-x6q9z" Nov 23 09:39:00 crc kubenswrapper[5028]: I1123 09:39:00.947302 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:39:00 crc kubenswrapper[5028]: I1123 09:39:00.948499 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:39:00 crc kubenswrapper[5028]: I1123 09:39:00.948597 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 09:39:00 crc kubenswrapper[5028]: I1123 09:39:00.950523 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0f1635254f1021f75a78a4d090c542766ebc56f9caf285d4a9adad855a050d2"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:39:00 crc kubenswrapper[5028]: I1123 09:39:00.950642 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://c0f1635254f1021f75a78a4d090c542766ebc56f9caf285d4a9adad855a050d2" gracePeriod=600 Nov 23 09:39:01 crc kubenswrapper[5028]: I1123 09:39:01.163140 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="c0f1635254f1021f75a78a4d090c542766ebc56f9caf285d4a9adad855a050d2" exitCode=0 Nov 23 09:39:01 crc kubenswrapper[5028]: I1123 09:39:01.163189 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"c0f1635254f1021f75a78a4d090c542766ebc56f9caf285d4a9adad855a050d2"} Nov 23 09:39:01 crc kubenswrapper[5028]: I1123 09:39:01.163227 5028 scope.go:117] "RemoveContainer" containerID="a154b873919582e742cd6fd08e0dd8114a6f75cba53eb96763a29f36d5858bbb" Nov 23 09:39:02 crc kubenswrapper[5028]: I1123 09:39:02.204530 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3"} Nov 23 09:39:18 crc kubenswrapper[5028]: I1123 09:39:18.347337 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:39:18 crc kubenswrapper[5028]: I1123 09:39:18.348664 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" containerName="nova-cell0-conductor-conductor" containerID="cri-o://f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" gracePeriod=30 Nov 23 09:39:18 crc kubenswrapper[5028]: I1123 
Nov 23 09:39:18 crc kubenswrapper[5028]: I1123 09:39:18.367268 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 23 09:39:18 crc kubenswrapper[5028]: I1123 09:39:18.367796 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="1665af0d-f89b-4704-95cc-4e46d2493132" containerName="nova-cell1-conductor-conductor" containerID="cri-o://a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d" gracePeriod=30
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.312030 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.313181 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-api" containerID="cri-o://3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a" gracePeriod=30
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.312948 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-log" containerID="cri-o://4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766" gracePeriod=30
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.352881 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.353706 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" containerName="nova-scheduler-scheduler" containerID="cri-o://dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb" gracePeriod=30
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.374254 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.374646 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-log" containerID="cri-o://343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb" gracePeriod=30
Nov 23 09:39:19 crc kubenswrapper[5028]: I1123 09:39:19.375128 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-metadata" containerID="cri-o://1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39" gracePeriod=30
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.114563 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtgvm\" (UniqueName: \"kubernetes.io/projected/1665af0d-f89b-4704-95cc-4e46d2493132-kube-api-access-vtgvm\") pod \"1665af0d-f89b-4704-95cc-4e46d2493132\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.114685 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-combined-ca-bundle\") pod \"1665af0d-f89b-4704-95cc-4e46d2493132\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.128435 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1665af0d-f89b-4704-95cc-4e46d2493132-kube-api-access-vtgvm" (OuterVolumeSpecName: "kube-api-access-vtgvm") pod "1665af0d-f89b-4704-95cc-4e46d2493132" (UID: "1665af0d-f89b-4704-95cc-4e46d2493132"). InnerVolumeSpecName "kube-api-access-vtgvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.177274 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1665af0d-f89b-4704-95cc-4e46d2493132" (UID: "1665af0d-f89b-4704-95cc-4e46d2493132"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.216823 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-config-data\") pod \"1665af0d-f89b-4704-95cc-4e46d2493132\" (UID: \"1665af0d-f89b-4704-95cc-4e46d2493132\") " Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.217484 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtgvm\" (UniqueName: \"kubernetes.io/projected/1665af0d-f89b-4704-95cc-4e46d2493132-kube-api-access-vtgvm\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.217506 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.252578 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-config-data" (OuterVolumeSpecName: "config-data") pod "1665af0d-f89b-4704-95cc-4e46d2493132" (UID: "1665af0d-f89b-4704-95cc-4e46d2493132"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.319616 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1665af0d-f89b-4704-95cc-4e46d2493132-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.441401 5028 generic.go:334] "Generic (PLEG): container finished" podID="1665af0d-f89b-4704-95cc-4e46d2493132" containerID="a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d" exitCode=0 Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.441513 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1665af0d-f89b-4704-95cc-4e46d2493132","Type":"ContainerDied","Data":"a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d"} Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.441623 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.442038 5028 scope.go:117] "RemoveContainer" containerID="a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.441997 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1665af0d-f89b-4704-95cc-4e46d2493132","Type":"ContainerDied","Data":"13f66c8f5306a02a498ed958b9608415c122815bada41904200aa1bbdd42b739"} Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.445533 5028 generic.go:334] "Generic (PLEG): container finished" podID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerID="4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766" exitCode=143 Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.445604 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6044e0c2-c84b-4c18-80a5-c0198d883e68","Type":"ContainerDied","Data":"4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766"} Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.447590 5028 generic.go:334] "Generic (PLEG): container finished" podID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerID="343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb" exitCode=143 Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.447622 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dd4b501-4513-45a2-9136-aad11a7150cf","Type":"ContainerDied","Data":"343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb"} Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.648906 5028 scope.go:117] "RemoveContainer" containerID="a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.658102 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:39:20 crc kubenswrapper[5028]: E1123 09:39:20.661607 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d\": container with ID starting with a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d not found: ID does not exist" containerID="a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.661654 5028 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d"} err="failed to get container status \"a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d\": rpc error: code = NotFound desc = could not find container \"a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d\": container with ID starting with a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d not found: ID does not exist" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.695873 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.711279 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:39:20 crc kubenswrapper[5028]: E1123 09:39:20.711970 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74331472-e7a6-4f7a-a7e9-32c195f1e4cf" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.711992 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="74331472-e7a6-4f7a-a7e9-32c195f1e4cf" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 23 09:39:20 crc kubenswrapper[5028]: E1123 09:39:20.712027 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1665af0d-f89b-4704-95cc-4e46d2493132" containerName="nova-cell1-conductor-conductor" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.712035 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="1665af0d-f89b-4704-95cc-4e46d2493132" containerName="nova-cell1-conductor-conductor" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.712282 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="74331472-e7a6-4f7a-a7e9-32c195f1e4cf" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.712322 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="1665af0d-f89b-4704-95cc-4e46d2493132" containerName="nova-cell1-conductor-conductor" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.713353 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.720489 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 23 09:39:20 crc kubenswrapper[5028]: E1123 09:39:20.724517 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af is running failed: container process not found" containerID="f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 09:39:20 crc kubenswrapper[5028]: E1123 09:39:20.732203 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af is running failed: container process not found" containerID="f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 09:39:20 crc kubenswrapper[5028]: E1123 09:39:20.733255 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af is running failed: container process not found" containerID="f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 23 09:39:20 crc kubenswrapper[5028]: E1123 09:39:20.733346 5028 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" containerName="nova-cell0-conductor-conductor" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.752469 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.835477 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.836144 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv9pk\" (UniqueName: \"kubernetes.io/projected/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-kube-api-access-qv9pk\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.836230 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.895704 5028 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.938452 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.938595 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv9pk\" (UniqueName: \"kubernetes.io/projected/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-kube-api-access-qv9pk\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.938671 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.946556 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:20 crc kubenswrapper[5028]: I1123 09:39:20.948275 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.040002 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-786fw\" (UniqueName: \"kubernetes.io/projected/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-kube-api-access-786fw\") pod \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.040208 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-config-data\") pod \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.040453 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-combined-ca-bundle\") pod \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\" (UID: \"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d\") " Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.079356 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1665af0d-f89b-4704-95cc-4e46d2493132" path="/var/lib/kubelet/pods/1665af0d-f89b-4704-95cc-4e46d2493132/volumes" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.463388 5028 generic.go:334] "Generic (PLEG): container finished" podID="bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" containerID="f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" exitCode=0 Nov 23 09:39:21 crc 
kubenswrapper[5028]: I1123 09:39:21.463527 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.659298 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv9pk\" (UniqueName: \"kubernetes.io/projected/1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d-kube-api-access-qv9pk\") pod \"nova-cell1-conductor-0\" (UID: \"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d\") " pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.674374 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-kube-api-access-786fw" (OuterVolumeSpecName: "kube-api-access-786fw") pod "bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" (UID: "bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d"). InnerVolumeSpecName "kube-api-access-786fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.698436 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" (UID: "bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.728210 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-config-data" (OuterVolumeSpecName: "config-data") pod "bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" (UID: "bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.760549 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-786fw\" (UniqueName: \"kubernetes.io/projected/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-kube-api-access-786fw\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.760593 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.760606 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.820519 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d","Type":"ContainerDied","Data":"f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af"} Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.820585 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d","Type":"ContainerDied","Data":"7aa50e6be0990133f59ddf70976688178d5ed5d8f7631fd2021fb67b450e9a04"} Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.820614 5028 scope.go:117] "RemoveContainer" containerID="f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.852028 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.871344 5028 scope.go:117] "RemoveContainer" containerID="f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" Nov 23 09:39:21 crc kubenswrapper[5028]: E1123 09:39:21.872254 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af\": container with ID starting with f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af not found: ID does not exist" containerID="f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.872326 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af"} err="failed to get container status \"f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af\": rpc error: code = NotFound desc = could not find container \"f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af\": container with ID starting with f376717c0cb9b0a600a9628d3bdf48c5822d942ca850507c47eaf36cadbd17af not found: ID does not exist" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.885762 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.902811 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:39:21 crc kubenswrapper[5028]: E1123 09:39:21.903533 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" 
containerName="nova-cell0-conductor-conductor" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.903554 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" containerName="nova-cell0-conductor-conductor" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.903811 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" containerName="nova-cell0-conductor-conductor" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.904765 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.909392 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.922646 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:39:21 crc kubenswrapper[5028]: I1123 09:39:21.950335 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.071674 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lwjk\" (UniqueName: \"kubernetes.io/projected/eef17367-78dc-4965-b642-ce9491d8c0af-kube-api-access-2lwjk\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.071837 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef17367-78dc-4965-b642-ce9491d8c0af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.071987 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef17367-78dc-4965-b642-ce9491d8c0af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.175651 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lwjk\" (UniqueName: \"kubernetes.io/projected/eef17367-78dc-4965-b642-ce9491d8c0af-kube-api-access-2lwjk\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.176067 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef17367-78dc-4965-b642-ce9491d8c0af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.176865 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef17367-78dc-4965-b642-ce9491d8c0af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 
09:39:22.184409 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef17367-78dc-4965-b642-ce9491d8c0af-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.192232 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef17367-78dc-4965-b642-ce9491d8c0af-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.195308 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lwjk\" (UniqueName: \"kubernetes.io/projected/eef17367-78dc-4965-b642-ce9491d8c0af-kube-api-access-2lwjk\") pod \"nova-cell0-conductor-0\" (UID: \"eef17367-78dc-4965-b642-ce9491d8c0af\") " pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.251715 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.443857 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.480985 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d","Type":"ContainerStarted","Data":"1052ac5aeafe929b0b1203e49e2aafaee0217197eec1095b576de5c64ca1bcb0"} Nov 23 09:39:22 crc kubenswrapper[5028]: I1123 09:39:22.751435 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 23 09:39:22 crc kubenswrapper[5028]: E1123 09:39:22.881110 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 09:39:22 crc kubenswrapper[5028]: E1123 09:39:22.887737 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 09:39:22 crc kubenswrapper[5028]: E1123 09:39:22.890493 5028 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 23 09:39:22 crc kubenswrapper[5028]: E1123 09:39:22.890575 5028 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" containerName="nova-scheduler-scheduler" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.068640 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d" path="/var/lib/kubelet/pods/bbfb8a6b-2b6b-45d8-a1ef-aba840ffc02d/volumes" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.126306 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.221760 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-combined-ca-bundle\") pod \"7dd4b501-4513-45a2-9136-aad11a7150cf\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.222008 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dd4b501-4513-45a2-9136-aad11a7150cf-logs\") pod \"7dd4b501-4513-45a2-9136-aad11a7150cf\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.222116 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-config-data\") pod \"7dd4b501-4513-45a2-9136-aad11a7150cf\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.222189 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5vw\" (UniqueName: \"kubernetes.io/projected/7dd4b501-4513-45a2-9136-aad11a7150cf-kube-api-access-hb5vw\") pod \"7dd4b501-4513-45a2-9136-aad11a7150cf\" (UID: \"7dd4b501-4513-45a2-9136-aad11a7150cf\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.225213 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd4b501-4513-45a2-9136-aad11a7150cf-logs" (OuterVolumeSpecName: "logs") pod "7dd4b501-4513-45a2-9136-aad11a7150cf" (UID: "7dd4b501-4513-45a2-9136-aad11a7150cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.240948 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd4b501-4513-45a2-9136-aad11a7150cf-kube-api-access-hb5vw" (OuterVolumeSpecName: "kube-api-access-hb5vw") pod "7dd4b501-4513-45a2-9136-aad11a7150cf" (UID: "7dd4b501-4513-45a2-9136-aad11a7150cf"). InnerVolumeSpecName "kube-api-access-hb5vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.287093 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-config-data" (OuterVolumeSpecName: "config-data") pod "7dd4b501-4513-45a2-9136-aad11a7150cf" (UID: "7dd4b501-4513-45a2-9136-aad11a7150cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.308768 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dd4b501-4513-45a2-9136-aad11a7150cf" (UID: "7dd4b501-4513-45a2-9136-aad11a7150cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.325827 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.325886 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7dd4b501-4513-45a2-9136-aad11a7150cf-logs\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.325901 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd4b501-4513-45a2-9136-aad11a7150cf-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.325915 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb5vw\" (UniqueName: \"kubernetes.io/projected/7dd4b501-4513-45a2-9136-aad11a7150cf-kube-api-access-hb5vw\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.356355 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.509277 5028 generic.go:334] "Generic (PLEG): container finished" podID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerID="1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39" exitCode=0 Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.509362 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dd4b501-4513-45a2-9136-aad11a7150cf","Type":"ContainerDied","Data":"1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39"} Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.509401 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7dd4b501-4513-45a2-9136-aad11a7150cf","Type":"ContainerDied","Data":"70be0ceb8a7781c71169c30d1e287398781f0846502aa814bdc1e4b2749a3119"} Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.509423 5028 scope.go:117] "RemoveContainer" containerID="1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.509596 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.514650 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d","Type":"ContainerStarted","Data":"376d303bb680093693b5802d1734a3339e52e4aecddf437f2989aece72c24dfc"} Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.515116 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.517462 5028 generic.go:334] "Generic (PLEG): container finished" podID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerID="3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a" exitCode=0 Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.517563 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.517542 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6044e0c2-c84b-4c18-80a5-c0198d883e68","Type":"ContainerDied","Data":"3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a"} Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.517692 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6044e0c2-c84b-4c18-80a5-c0198d883e68","Type":"ContainerDied","Data":"67f7f6555fa8ce0a2240ee9fa91195f50a04a409c3fe35ec40704d9bd3fa2363"} Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.518883 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eef17367-78dc-4965-b642-ce9491d8c0af","Type":"ContainerStarted","Data":"85ff4cb42725f63364282a826ae72cced8d26a4fd4568583064cbfda1b100510"} Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.518905 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"eef17367-78dc-4965-b642-ce9491d8c0af","Type":"ContainerStarted","Data":"591bee0fad634fd3e25a1ed73d14add78a15e198aef54b116f5807d59a069567"} Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.519602 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.529931 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-combined-ca-bundle\") pod \"6044e0c2-c84b-4c18-80a5-c0198d883e68\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.530016 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6044e0c2-c84b-4c18-80a5-c0198d883e68-logs\") pod \"6044e0c2-c84b-4c18-80a5-c0198d883e68\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.530088 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc528\" (UniqueName: \"kubernetes.io/projected/6044e0c2-c84b-4c18-80a5-c0198d883e68-kube-api-access-fc528\") pod \"6044e0c2-c84b-4c18-80a5-c0198d883e68\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.530261 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-config-data\") pod \"6044e0c2-c84b-4c18-80a5-c0198d883e68\" (UID: \"6044e0c2-c84b-4c18-80a5-c0198d883e68\") " Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.530611 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6044e0c2-c84b-4c18-80a5-c0198d883e68-logs" (OuterVolumeSpecName: "logs") pod "6044e0c2-c84b-4c18-80a5-c0198d883e68" (UID: "6044e0c2-c84b-4c18-80a5-c0198d883e68"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.534624 5028 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6044e0c2-c84b-4c18-80a5-c0198d883e68-logs\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.556020 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6044e0c2-c84b-4c18-80a5-c0198d883e68-kube-api-access-fc528" (OuterVolumeSpecName: "kube-api-access-fc528") pod "6044e0c2-c84b-4c18-80a5-c0198d883e68" (UID: "6044e0c2-c84b-4c18-80a5-c0198d883e68"). InnerVolumeSpecName "kube-api-access-fc528". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.556704 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.556667234 podStartE2EDuration="3.556667234s" podCreationTimestamp="2025-11-23 09:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:39:23.542134113 +0000 UTC m=+10147.239538892" watchObservedRunningTime="2025-11-23 09:39:23.556667234 +0000 UTC m=+10147.254072013" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.582933 5028 scope.go:117] "RemoveContainer" containerID="343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.583684 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6044e0c2-c84b-4c18-80a5-c0198d883e68" (UID: "6044e0c2-c84b-4c18-80a5-c0198d883e68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.589720 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.589691296 podStartE2EDuration="2.589691296s" podCreationTimestamp="2025-11-23 09:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:39:23.561240438 +0000 UTC m=+10147.258645217" watchObservedRunningTime="2025-11-23 09:39:23.589691296 +0000 UTC m=+10147.287096075" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.629889 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.631087 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-config-data" (OuterVolumeSpecName: "config-data") pod "6044e0c2-c84b-4c18-80a5-c0198d883e68" (UID: "6044e0c2-c84b-4c18-80a5-c0198d883e68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.638850 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.638882 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc528\" (UniqueName: \"kubernetes.io/projected/6044e0c2-c84b-4c18-80a5-c0198d883e68-kube-api-access-fc528\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.638892 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6044e0c2-c84b-4c18-80a5-c0198d883e68-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.648578 5028 scope.go:117] "RemoveContainer" containerID="1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.650691 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.652690 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39\": container with ID starting with 1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39 not found: ID does not exist" containerID="1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.652737 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39"} err="failed to get container status \"1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39\": rpc error: code = NotFound desc = could not find container \"1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39\": container with ID starting with 1f10bf63950913bc2a9a5c2797813d03d2e386f15de4fbe46053099585534c39 not found: ID does not exist" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.652790 5028 scope.go:117] "RemoveContainer" containerID="343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb" Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.654729 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb\": container with ID starting with 343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb not found: ID does not exist" containerID="343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.654798 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb"} err="failed to get container status \"343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb\": rpc error: code = NotFound desc = could not find container \"343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb\": container with ID starting with 343c8762093abe1e9dda0cc3a2e52f3885aa01be58d263e481ac01ef7d97c0cb not found: ID does not exist" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.654837 
5028 scope.go:117] "RemoveContainer" containerID="3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.663004 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.663784 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-log" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.663807 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-log" Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.663839 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-metadata" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.663846 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-metadata" Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.663867 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-log" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.663876 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-log" Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.663915 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-api" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.663922 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-api" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.664296 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-log" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.664345 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-log" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.664362 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-metadata" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.664382 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" containerName="nova-api-api" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.667321 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.670897 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.676938 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.685795 5028 scope.go:117] "RemoveContainer" containerID="4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.725157 5028 scope.go:117] "RemoveContainer" containerID="3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a" Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.725793 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a\": container with ID starting with 3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a not found: ID does not exist" containerID="3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.725862 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a"} err="failed to get container status \"3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a\": rpc error: code = NotFound desc = could not find container \"3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a\": container with ID starting with 3a9413ea104c911bb0b4140eff206c7795dee9f6cf4c4201d9fa8ff3f20cf95a not found: ID does not exist" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.725910 5028 scope.go:117] "RemoveContainer" containerID="4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766" Nov 23 09:39:23 crc kubenswrapper[5028]: E1123 09:39:23.726467 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766\": container with ID starting with 4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766 not found: ID does not exist" containerID="4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.726531 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766"} err="failed to get container status \"4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766\": rpc error: code = NotFound desc = could not find container \"4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766\": container with ID starting with 4e6372a330d4271db6b2908bacdfc238ebbadf8c582e22adf9019f45965d8766 not found: ID does not exist" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.848281 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92a53659-03ef-4bd3-940f-cf8528b8012d-logs\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.848463 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xhvnn\" (UniqueName: \"kubernetes.io/projected/92a53659-03ef-4bd3-940f-cf8528b8012d-kube-api-access-xhvnn\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.848619 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a53659-03ef-4bd3-940f-cf8528b8012d-config-data\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.848662 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a53659-03ef-4bd3-940f-cf8528b8012d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.880075 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.898039 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.914039 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.917060 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.926454 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.950881 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92a53659-03ef-4bd3-940f-cf8528b8012d-logs\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.950980 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhvnn\" (UniqueName: \"kubernetes.io/projected/92a53659-03ef-4bd3-940f-cf8528b8012d-kube-api-access-xhvnn\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.951063 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a53659-03ef-4bd3-940f-cf8528b8012d-config-data\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.951087 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a53659-03ef-4bd3-940f-cf8528b8012d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.952566 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92a53659-03ef-4bd3-940f-cf8528b8012d-logs\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 
09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.956135 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.959383 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92a53659-03ef-4bd3-940f-cf8528b8012d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.960938 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92a53659-03ef-4bd3-940f-cf8528b8012d-config-data\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.973310 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhvnn\" (UniqueName: \"kubernetes.io/projected/92a53659-03ef-4bd3-940f-cf8528b8012d-kube-api-access-xhvnn\") pod \"nova-metadata-0\" (UID: \"92a53659-03ef-4bd3-940f-cf8528b8012d\") " pod="openstack/nova-metadata-0" Nov 23 09:39:23 crc kubenswrapper[5028]: I1123 09:39:23.992422 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.056702 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb306-8a5c-4842-9d43-126018e87996-logs\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.057064 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6gt2\" (UniqueName: \"kubernetes.io/projected/edffb306-8a5c-4842-9d43-126018e87996-kube-api-access-x6gt2\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.057218 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb306-8a5c-4842-9d43-126018e87996-config-data\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.057382 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb306-8a5c-4842-9d43-126018e87996-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.159410 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb306-8a5c-4842-9d43-126018e87996-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.159837 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb306-8a5c-4842-9d43-126018e87996-logs\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" 
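
[Annotation — not journal output] The entries above for pod "openstack/nova-api-0" show the kubelet's volume reconciler walking its desired-state/actual-state loop: reconciler_common.go:245 records VerifyControllerAttachedVolume for each volume in the replacement pod's spec, reconciler_common.go:218 marks "MountVolume started", and operation_generator.go:637 confirms "MountVolume.SetUp succeeded"; the mirror-image teardown for the old pod UIDs appears earlier in this journal as reconciler_common.go:159 ("UnmountVolume started"), operation_generator.go:803 ("UnmountVolume.TearDown succeeded"), and reconciler_common.go:293 ("Volume detached"). Below is a minimal, hypothetical Go sketch of that reconcile pattern — plain maps standing in for the kubelet's real desired/actual-state types, with the volume names copied from the nova-api-0 entries; it is an illustration of the logged behavior, not kubelet source:

```go
// Hypothetical sketch (not kubelet code): desired-state vs actual-state
// volume reconciliation, the pattern behind the mount/unmount lines above.
package main

import "fmt"

type reconciler struct {
	desired map[string]string // volume name -> plugin required by the pod spec
	actual  map[string]string // volume name -> plugin currently set up on the node
}

// reconcile sets up anything desired but missing, then tears down anything
// set up but no longer desired -- which is why mounts for the replacement
// pod UID interleave with unmounts for the old pod UID in the journal.
func (r *reconciler) reconcile(pod string) {
	for name, plugin := range r.desired {
		if _, ok := r.actual[name]; !ok {
			fmt.Printf("MountVolume started for volume %q (%s) pod=%q\n", name, plugin, pod)
			r.actual[name] = plugin
			fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", name, pod)
		}
	}
	for name := range r.actual {
		if _, ok := r.desired[name]; !ok {
			fmt.Printf("UnmountVolume started, then volume detached: %q\n", name)
			delete(r.actual, name)
		}
	}
}

func main() {
	r := &reconciler{
		desired: map[string]string{
			"logs":                  "kubernetes.io/empty-dir",
			"config-data":           "kubernetes.io/secret",
			"combined-ca-bundle":    "kubernetes.io/secret",
			"kube-api-access-x6gt2": "kubernetes.io/projected",
		},
		actual: map[string]string{},
	}
	r.reconcile("openstack/nova-api-0")
}
```

A second pattern repeats throughout this section (for containers a6fda14e…, f376717c…, 1f10bf63…, 343c8762…, 3a9413ea…, 4e6372a3…, dd14c153…): "RemoveContainer", then an E-level "ContainerStatus from runtime service failed … NotFound", then an I-level "DeleteContainer returned error". The status lookup races with the runtime's removal of the container, and NotFound appears to be treated as "already deleted", so despite the error text these lines look benign. A hypothetical sketch of that idempotent-delete handling, again with illustrative names only:

```go
// Hypothetical sketch: tolerate NotFound when deleting a container that the
// runtime has already removed, mirroring the benign DeleteContainer lines.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("no such container") // stand-in for a CRI NotFound status

// containerStatus simulates a CRI ContainerStatus call against a runtime
// that has already removed the container.
func containerStatus(id string) error { return errNotFound }

func removeContainer(id string) {
	if err := containerStatus(id); errors.Is(err, errNotFound) {
		// Already gone: log and treat the delete as a success rather than retrying.
		fmt.Printf("DeleteContainer: %s already removed, treating as success\n", id[:12])
		return
	}
	// ...otherwise stop and remove the still-present container here.
}

func main() {
	removeContainer("a6fda14eccd72b4c8dc2159e45f14d2f44a541b293cc49b1ac5040b2c780586d")
}
```

[End annotation]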
Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.159883 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6gt2\" (UniqueName: \"kubernetes.io/projected/edffb306-8a5c-4842-9d43-126018e87996-kube-api-access-x6gt2\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.159940 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb306-8a5c-4842-9d43-126018e87996-config-data\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.162715 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edffb306-8a5c-4842-9d43-126018e87996-logs\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.171081 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edffb306-8a5c-4842-9d43-126018e87996-config-data\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.185753 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edffb306-8a5c-4842-9d43-126018e87996-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.189646 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6gt2\" (UniqueName: \"kubernetes.io/projected/edffb306-8a5c-4842-9d43-126018e87996-kube-api-access-x6gt2\") pod \"nova-api-0\" (UID: \"edffb306-8a5c-4842-9d43-126018e87996\") " pod="openstack/nova-api-0" Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.247012 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0"
Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.558167 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 23 09:39:24 crc kubenswrapper[5028]: I1123 09:39:24.808548 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 23 09:39:24 crc kubenswrapper[5028]: W1123 09:39:24.821005 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedffb306_8a5c_4842_9d43_126018e87996.slice/crio-e7f98e98af080405fdc17b0aacbec9a2feeafe9f093e0d58ca4a3c666e189478 WatchSource:0}: Error finding container e7f98e98af080405fdc17b0aacbec9a2feeafe9f093e0d58ca4a3c666e189478: Status 404 returned error can't find the container with id e7f98e98af080405fdc17b0aacbec9a2feeafe9f093e0d58ca4a3c666e189478
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.072688 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6044e0c2-c84b-4c18-80a5-c0198d883e68" path="/var/lib/kubelet/pods/6044e0c2-c84b-4c18-80a5-c0198d883e68/volumes"
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.074418 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" path="/var/lib/kubelet/pods/7dd4b501-4513-45a2-9136-aad11a7150cf/volumes"
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.557330 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"edffb306-8a5c-4842-9d43-126018e87996","Type":"ContainerStarted","Data":"ba3ddfd2d6a33fd532bcebe21bdc6e457d84ca8a63d417b379a52b00030ba5d4"}
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.559382 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"edffb306-8a5c-4842-9d43-126018e87996","Type":"ContainerStarted","Data":"9d4fc0d2e06583ffe958349f8eb56d6d1ac5c17c0a8e696d3083cce8c0c1bacb"}
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.560067 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"edffb306-8a5c-4842-9d43-126018e87996","Type":"ContainerStarted","Data":"e7f98e98af080405fdc17b0aacbec9a2feeafe9f093e0d58ca4a3c666e189478"}
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.560911 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92a53659-03ef-4bd3-940f-cf8528b8012d","Type":"ContainerStarted","Data":"c4011c739384fc578d20ddaed97558a1d8f967e37c813240eca68a1ae2c4ee81"}
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.560942 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92a53659-03ef-4bd3-940f-cf8528b8012d","Type":"ContainerStarted","Data":"5c4170eec95f6c48bb7d48e58af74df12a43d3c8535574da11fcef5c26c1baef"}
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.560965 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"92a53659-03ef-4bd3-940f-cf8528b8012d","Type":"ContainerStarted","Data":"92063e84dc9b26a91a4a6a9fd7b858991bca4e2291d87f9bfb1cad261f1d653c"}
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.586521 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.586492209 podStartE2EDuration="2.586492209s" podCreationTimestamp="2025-11-23 09:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:39:25.578210183 +0000 UTC m=+10149.275614962" watchObservedRunningTime="2025-11-23 09:39:25.586492209 +0000 UTC m=+10149.283896988"
Nov 23 09:39:25 crc kubenswrapper[5028]: I1123 09:39:25.612824 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.612798984 podStartE2EDuration="2.612798984s" podCreationTimestamp="2025-11-23 09:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:39:25.599033861 +0000 UTC m=+10149.296438670" watchObservedRunningTime="2025-11-23 09:39:25.612798984 +0000 UTC m=+10149.310203763"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.505637 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.588614 5028 generic.go:334] "Generic (PLEG): container finished" podID="ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" containerID="dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb" exitCode=0
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.588828 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.588894 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5","Type":"ContainerDied","Data":"dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb"}
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.588943 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5","Type":"ContainerDied","Data":"24066b8ae1c1df9ab4bf1497135792cdb822dd93f9d0ebc32025870f0f2235ca"}
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.588997 5028 scope.go:117] "RemoveContainer" containerID="dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.626305 5028 scope.go:117] "RemoveContainer" containerID="dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb"
Nov 23 09:39:27 crc kubenswrapper[5028]: E1123 09:39:27.626908 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb\": container with ID starting with dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb not found: ID does not exist" containerID="dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.626992 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb"} err="failed to get container status \"dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb\": rpc error: code = NotFound desc = could not find container \"dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb\": container with ID starting with dd14c153658581cc786820c92b5bf15ed4b5dc879db0c96663ab7c1e2b6eabcb not found: ID does not exist"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.667434 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w48c\" (UniqueName: \"kubernetes.io/projected/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-kube-api-access-7w48c\") pod \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") "
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.668188 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-combined-ca-bundle\") pod \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") "
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.668514 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-config-data\") pod \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\" (UID: \"ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5\") "
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.675833 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-kube-api-access-7w48c" (OuterVolumeSpecName: "kube-api-access-7w48c") pod "ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" (UID: "ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5"). InnerVolumeSpecName "kube-api-access-7w48c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.702558 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" (UID: "ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.706317 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-config-data" (OuterVolumeSpecName: "config-data") pod "ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" (UID: "ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.775075 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w48c\" (UniqueName: \"kubernetes.io/projected/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-kube-api-access-7w48c\") on node \"crc\" DevicePath \"\""
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.775126 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.775144 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5-config-data\") on node \"crc\" DevicePath \"\""
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.807446 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.95:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.807457 5028 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7dd4b501-4513-45a2-9136-aad11a7150cf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.95:8775/\": dial tcp 10.217.1.95:8775: i/o timeout (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.942579 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.971252 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.989044 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 09:39:27 crc kubenswrapper[5028]: E1123 09:39:27.989997 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" containerName="nova-scheduler-scheduler"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.990033 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" containerName="nova-scheduler-scheduler"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.990521 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" containerName="nova-scheduler-scheduler"
Nov 23 09:39:27 crc kubenswrapper[5028]: I1123 09:39:27.992156 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.001813 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.006300 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.085725 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfc98ff-7be8-4623-8b5c-357b97b763cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.085795 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfc98ff-7be8-4623-8b5c-357b97b763cf-config-data\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.086202 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshzk\" (UniqueName: \"kubernetes.io/projected/6bfc98ff-7be8-4623-8b5c-357b97b763cf-kube-api-access-pshzk\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.190040 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pshzk\" (UniqueName: \"kubernetes.io/projected/6bfc98ff-7be8-4623-8b5c-357b97b763cf-kube-api-access-pshzk\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.190408 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfc98ff-7be8-4623-8b5c-357b97b763cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.190450 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfc98ff-7be8-4623-8b5c-357b97b763cf-config-data\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.197092 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bfc98ff-7be8-4623-8b5c-357b97b763cf-config-data\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.198626 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bfc98ff-7be8-4623-8b5c-357b97b763cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.214774 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pshzk\" (UniqueName: \"kubernetes.io/projected/6bfc98ff-7be8-4623-8b5c-357b97b763cf-kube-api-access-pshzk\") pod \"nova-scheduler-0\" (UID: \"6bfc98ff-7be8-4623-8b5c-357b97b763cf\") " pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.324220 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.858641 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.992938 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 09:39:28 crc kubenswrapper[5028]: I1123 09:39:28.993124 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 23 09:39:29 crc kubenswrapper[5028]: I1123 09:39:29.074337 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5" path="/var/lib/kubelet/pods/ccfc21bc-ad1b-4bb2-b24f-623b09d42cc5/volumes"
Nov 23 09:39:29 crc kubenswrapper[5028]: I1123 09:39:29.634893 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bfc98ff-7be8-4623-8b5c-357b97b763cf","Type":"ContainerStarted","Data":"81b1d7d2bc9d8fe242cb561419aad732c6ad242f7f45de789fceec999b1ae14d"}
Nov 23 09:39:29 crc kubenswrapper[5028]: I1123 09:39:29.635500 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6bfc98ff-7be8-4623-8b5c-357b97b763cf","Type":"ContainerStarted","Data":"6e1edc872bb48bbd114fec84d91265087c8dd02aef75dfb57fb2abbf6d057551"}
Nov 23 09:39:29 crc kubenswrapper[5028]: I1123 09:39:29.666360 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.666327373 podStartE2EDuration="2.666327373s" podCreationTimestamp="2025-11-23 09:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:39:29.653900854 +0000 UTC m=+10153.351305633" watchObservedRunningTime="2025-11-23 09:39:29.666327373 +0000 UTC m=+10153.363732152"
Nov 23 09:39:32 crc kubenswrapper[5028]: I1123 09:39:32.017288 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 23 09:39:32 crc kubenswrapper[5028]: I1123 09:39:32.290841 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 23 09:39:33 crc kubenswrapper[5028]: I1123 09:39:33.324997 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 23 09:39:33 crc kubenswrapper[5028]: I1123 09:39:33.993594 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 09:39:33 crc kubenswrapper[5028]: I1123 09:39:33.994384 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 23 09:39:34 crc kubenswrapper[5028]: I1123 09:39:34.248743 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 09:39:34 crc kubenswrapper[5028]: I1123 09:39:34.248817 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 23 09:39:35 crc kubenswrapper[5028]: I1123 09:39:35.077340 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92a53659-03ef-4bd3-940f-cf8528b8012d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.200:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:39:35 crc kubenswrapper[5028]: I1123 09:39:35.077336 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="92a53659-03ef-4bd3-940f-cf8528b8012d" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.200:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:39:35 crc kubenswrapper[5028]: I1123 09:39:35.331355 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="edffb306-8a5c-4842-9d43-126018e87996" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:39:35 crc kubenswrapper[5028]: I1123 09:39:35.331622 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="edffb306-8a5c-4842-9d43-126018e87996" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 23 09:39:38 crc kubenswrapper[5028]: I1123 09:39:38.325232 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 23 09:39:38 crc kubenswrapper[5028]: I1123 09:39:38.370476 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 23 09:39:39 crc kubenswrapper[5028]: I1123 09:39:39.229914 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 23 09:39:43 crc kubenswrapper[5028]: I1123 09:39:43.996393 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 09:39:43 crc kubenswrapper[5028]: I1123 09:39:43.997418 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 23 09:39:43 crc kubenswrapper[5028]: I1123 09:39:43.999775 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 09:39:44 crc kubenswrapper[5028]: I1123 09:39:44.000942 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 23 09:39:44 crc kubenswrapper[5028]: I1123 09:39:44.253797 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 09:39:44 crc kubenswrapper[5028]: I1123 09:39:44.254312 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 09:39:44 crc kubenswrapper[5028]: I1123 09:39:44.258481 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 09:39:44 crc kubenswrapper[5028]: I1123 09:39:44.259115 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 23 09:39:45 crc kubenswrapper[5028]: I1123 09:39:45.267746 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 23 09:39:45 crc kubenswrapper[5028]: I1123 09:39:45.280562 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.371494 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"]
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.375297 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.378088 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.378589 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.379097 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.379212 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.379223 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-jd84g"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.380289 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.385324 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.397191 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"]
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.400819 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.400918 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401095 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401171 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401208 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401253 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401271 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401312 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401350 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401456 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.401535 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzl98\" (UniqueName: \"kubernetes.io/projected/05b27132-4980-4bfa-97b0-463d53cd4486-kube-api-access-wzl98\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.504320 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505022 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505205 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505335 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505451 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505531 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505647 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505799 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.505999 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.506144 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzl98\" (UniqueName: \"kubernetes.io/projected/05b27132-4980-4bfa-97b0-463d53cd4486-kube-api-access-wzl98\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.506294 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.506649 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.515119 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.515824 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.516040 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.516171 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ceph\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.516227 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.516537 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.517922 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.518615 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.518849 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.532389 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzl98\" (UniqueName: \"kubernetes.io/projected/05b27132-4980-4bfa-97b0-463d53cd4486-kube-api-access-wzl98\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:46 crc kubenswrapper[5028]: I1123 09:39:46.707802 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:39:47 crc kubenswrapper[5028]: I1123 09:39:47.337439 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"]
Nov 23 09:39:47 crc kubenswrapper[5028]: W1123 09:39:47.346588 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05b27132_4980_4bfa_97b0_463d53cd4486.slice/crio-fcb41f8e81e892c730e02254d145a133c2953f4516cfe2b4d8bc64d10a084984 WatchSource:0}: Error finding container fcb41f8e81e892c730e02254d145a133c2953f4516cfe2b4d8bc64d10a084984: Status 404 returned error can't find the container with id fcb41f8e81e892c730e02254d145a133c2953f4516cfe2b4d8bc64d10a084984
Nov 23 09:39:48 crc kubenswrapper[5028]: I1123 09:39:48.309938 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj" event={"ID":"05b27132-4980-4bfa-97b0-463d53cd4486","Type":"ContainerStarted","Data":"2e82710877c61cb96523d083e22aa92e507ecd439857299b15fbf3fe06e6e1c5"}
Nov 23 09:39:48 crc kubenswrapper[5028]: I1123 09:39:48.310562 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj" event={"ID":"05b27132-4980-4bfa-97b0-463d53cd4486","Type":"ContainerStarted","Data":"fcb41f8e81e892c730e02254d145a133c2953f4516cfe2b4d8bc64d10a084984"}
Nov 23 09:39:48 crc kubenswrapper[5028]: I1123 09:39:48.336797 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj" podStartSLOduration=1.801619919 podStartE2EDuration="2.336766395s" podCreationTimestamp="2025-11-23 09:39:46 +0000 UTC" firstStartedPulling="2025-11-23 09:39:47.352111565 +0000 UTC m=+10171.049516354" lastFinishedPulling="2025-11-23 09:39:47.887258041 +0000 UTC m=+10171.584662830" observedRunningTime="2025-11-23 09:39:48.335161816 +0000 UTC m=+10172.032566595" watchObservedRunningTime="2025-11-23 09:39:48.336766395 +0000 UTC m=+10172.034171174"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.445858 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kpxr4"]
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.450617 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.488916 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kpxr4"]
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.573565 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-utilities\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.573789 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45cln\" (UniqueName: \"kubernetes.io/projected/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-kube-api-access-45cln\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.574005 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-catalog-content\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.676873 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45cln\" (UniqueName: \"kubernetes.io/projected/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-kube-api-access-45cln\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.677353 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-catalog-content\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.677549 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-utilities\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.677985 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-catalog-content\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.678041 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-utilities\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.713653 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45cln\" (UniqueName: \"kubernetes.io/projected/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-kube-api-access-45cln\") pod \"redhat-operators-kpxr4\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") " pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:09 crc kubenswrapper[5028]: I1123 09:40:09.792731 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:10 crc kubenswrapper[5028]: I1123 09:40:10.319969 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kpxr4"]
Nov 23 09:40:10 crc kubenswrapper[5028]: I1123 09:40:10.629064 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpxr4" event={"ID":"89d270b8-1cad-47e1-8dea-9ae0f41e3bad","Type":"ContainerDied","Data":"fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6"}
Nov 23 09:40:10 crc kubenswrapper[5028]: I1123 09:40:10.629015 5028 generic.go:334] "Generic (PLEG): container finished" podID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerID="fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6" exitCode=0
Nov 23 09:40:10 crc kubenswrapper[5028]: I1123 09:40:10.629560 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpxr4" event={"ID":"89d270b8-1cad-47e1-8dea-9ae0f41e3bad","Type":"ContainerStarted","Data":"933cbe21d45bd5e5407ca4481c6bec6c04220aafddd06809a96fbc40a586ea7c"}
Nov 23 09:40:11 crc kubenswrapper[5028]: I1123 09:40:11.643488 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpxr4" event={"ID":"89d270b8-1cad-47e1-8dea-9ae0f41e3bad","Type":"ContainerStarted","Data":"84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72"}
Nov 23 09:40:13 crc kubenswrapper[5028]: I1123 09:40:13.674160 5028 generic.go:334] "Generic (PLEG): container finished" podID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerID="84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72" exitCode=0
Nov 23 09:40:13 crc kubenswrapper[5028]: I1123 09:40:13.675147 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpxr4" event={"ID":"89d270b8-1cad-47e1-8dea-9ae0f41e3bad","Type":"ContainerDied","Data":"84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72"}
Nov 23 09:40:14 crc kubenswrapper[5028]: I1123 09:40:14.725592 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpxr4" event={"ID":"89d270b8-1cad-47e1-8dea-9ae0f41e3bad","Type":"ContainerStarted","Data":"800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3"}
Nov 23 09:40:14 crc kubenswrapper[5028]: I1123 09:40:14.761435 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kpxr4" podStartSLOduration=2.291806306 podStartE2EDuration="5.761398096s" podCreationTimestamp="2025-11-23 09:40:09 +0000 UTC" firstStartedPulling="2025-11-23 09:40:10.63158894 +0000 UTC m=+10194.328993719" lastFinishedPulling="2025-11-23 09:40:14.10118071 +0000 UTC m=+10197.798585509" observedRunningTime="2025-11-23 09:40:14.745288215 +0000 UTC m=+10198.442693004" watchObservedRunningTime="2025-11-23 09:40:14.761398096 +0000 UTC m=+10198.458802885"
Nov 23 09:40:19 crc kubenswrapper[5028]: I1123 09:40:19.793818 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:19 crc kubenswrapper[5028]: I1123 09:40:19.795125 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:20 crc kubenswrapper[5028]: I1123 09:40:20.850068 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kpxr4" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="registry-server" probeResult="failure" output=<
Nov 23 09:40:20 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s
Nov 23 09:40:20 crc kubenswrapper[5028]: >
Nov 23 09:40:29 crc kubenswrapper[5028]: I1123 09:40:29.897387 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:29 crc kubenswrapper[5028]: I1123 09:40:29.979653 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:30 crc kubenswrapper[5028]: I1123 09:40:30.161113 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kpxr4"]
Nov 23 09:40:30 crc kubenswrapper[5028]: I1123 09:40:30.961646 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kpxr4" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="registry-server" containerID="cri-o://800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3" gracePeriod=2
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.784734 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.849695 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-utilities\") pod \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") "
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.849759 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-catalog-content\") pod \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") "
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.850118 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45cln\" (UniqueName: \"kubernetes.io/projected/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-kube-api-access-45cln\") pod \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\" (UID: \"89d270b8-1cad-47e1-8dea-9ae0f41e3bad\") "
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.851167 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-utilities" (OuterVolumeSpecName: "utilities") pod "89d270b8-1cad-47e1-8dea-9ae0f41e3bad" (UID: "89d270b8-1cad-47e1-8dea-9ae0f41e3bad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.867325 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-kube-api-access-45cln" (OuterVolumeSpecName: "kube-api-access-45cln") pod "89d270b8-1cad-47e1-8dea-9ae0f41e3bad" (UID: "89d270b8-1cad-47e1-8dea-9ae0f41e3bad"). InnerVolumeSpecName "kube-api-access-45cln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.953117 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45cln\" (UniqueName: \"kubernetes.io/projected/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-kube-api-access-45cln\") on node \"crc\" DevicePath \"\""
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.953181 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.953787 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89d270b8-1cad-47e1-8dea-9ae0f41e3bad" (UID: "89d270b8-1cad-47e1-8dea-9ae0f41e3bad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.980178 5028 generic.go:334] "Generic (PLEG): container finished" podID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerID="800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3" exitCode=0
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.980242 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpxr4" event={"ID":"89d270b8-1cad-47e1-8dea-9ae0f41e3bad","Type":"ContainerDied","Data":"800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3"}
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.980283 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpxr4" event={"ID":"89d270b8-1cad-47e1-8dea-9ae0f41e3bad","Type":"ContainerDied","Data":"933cbe21d45bd5e5407ca4481c6bec6c04220aafddd06809a96fbc40a586ea7c"}
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.980309 5028 scope.go:117] "RemoveContainer" containerID="800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3"
Nov 23 09:40:31 crc kubenswrapper[5028]: I1123 09:40:31.980559 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kpxr4"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.011350 5028 scope.go:117] "RemoveContainer" containerID="84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.031095 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kpxr4"]
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.047094 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kpxr4"]
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.056521 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d270b8-1cad-47e1-8dea-9ae0f41e3bad-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.057488 5028 scope.go:117] "RemoveContainer" containerID="fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.102386 5028 scope.go:117] "RemoveContainer" containerID="800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3"
Nov 23 09:40:32 crc kubenswrapper[5028]: E1123 09:40:32.103053 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3\": container with ID starting with 800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3 not found: ID does not exist" containerID="800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.103092 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3"} err="failed to get container status \"800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3\": rpc error: code = NotFound desc = could not find container \"800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3\": container with ID starting with 800c4d265c4acc4e507c82161d0ae9881e3dc431eca9851e8a01f6fdafbac3a3 not found: ID does not exist"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.103121 5028 scope.go:117] "RemoveContainer" containerID="84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72"
Nov 23 09:40:32 crc kubenswrapper[5028]: E1123 09:40:32.104270 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72\": container with ID starting with 84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72 not found: ID does not exist" containerID="84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.104380 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72"} err="failed to get container status \"84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72\": rpc error: code = NotFound desc = could not find container \"84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72\": container with ID starting with 84a4c57703d577272b1fdb9215fcc81ce73a929c15aaf7e45894a82425404b72 not found: ID does not exist"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.104432 5028 scope.go:117] "RemoveContainer" containerID="fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6"
Nov 23 09:40:32 crc kubenswrapper[5028]: E1123 09:40:32.104989 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6\": container with ID starting with fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6 not found: ID does not exist" containerID="fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6"
Nov 23 09:40:32 crc kubenswrapper[5028]: I1123 09:40:32.105024 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6"} err="failed to get container status \"fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6\": rpc error: code = NotFound desc = could not find container \"fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6\": container with ID starting with fdbff6cb9d3d3972574e1cf561a873de0230d00e918ca5cf6bfad382620a2dc6 not found: ID does not exist"
Nov 23 09:40:33 crc kubenswrapper[5028]: I1123 09:40:33.068141 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" path="/var/lib/kubelet/pods/89d270b8-1cad-47e1-8dea-9ae0f41e3bad/volumes"
Nov 23 09:41:30 crc kubenswrapper[5028]: I1123 09:41:30.946509 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:41:30 crc kubenswrapper[5028]: I1123 09:41:30.947597 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:42:00 crc kubenswrapper[5028]: I1123 09:42:00.946087 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:42:00 crc kubenswrapper[5028]: I1123 09:42:00.947026 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:42:30 crc kubenswrapper[5028]: I1123 09:42:30.946369 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:42:30 crc kubenswrapper[5028]: I1123 09:42:30.947183 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:42:30 crc kubenswrapper[5028]: I1123 09:42:30.947247 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 09:42:30 crc kubenswrapper[5028]: I1123 09:42:30.948307 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 09:42:30 crc kubenswrapper[5028]: I1123 09:42:30.948367 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" gracePeriod=600
Nov 23 09:42:31 crc kubenswrapper[5028]: E1123 09:42:31.078593 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:42:31 crc kubenswrapper[5028]: I1123 09:42:31.562601 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" exitCode=0
Nov 23 09:42:31 crc kubenswrapper[5028]: I1123 09:42:31.562672 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3"}
Nov 23 09:42:31 crc kubenswrapper[5028]: I1123 09:42:31.562727 5028 scope.go:117] "RemoveContainer" containerID="c0f1635254f1021f75a78a4d090c542766ebc56f9caf285d4a9adad855a050d2"
Nov 23 09:42:31 crc kubenswrapper[5028]: I1123 09:42:31.564227 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3"
Nov 23 09:42:31 crc kubenswrapper[5028]: E1123 09:42:31.564617 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:42:47 crc kubenswrapper[5028]: I1123 09:42:47.060817 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3"
Nov 23 09:42:47 crc kubenswrapper[5028]: E1123 09:42:47.063010 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:42:59 crc kubenswrapper[5028]: I1123 09:42:59.055045 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3"
Nov 23 09:42:59 crc kubenswrapper[5028]: E1123 09:42:59.056530 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:42:59 crc kubenswrapper[5028]: I1123 09:42:59.956131 5028 generic.go:334] "Generic (PLEG): container finished" podID="05b27132-4980-4bfa-97b0-463d53cd4486" containerID="2e82710877c61cb96523d083e22aa92e507ecd439857299b15fbf3fe06e6e1c5" exitCode=0
Nov 23 09:42:59 crc kubenswrapper[5028]: I1123 09:42:59.956199 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj" event={"ID":"05b27132-4980-4bfa-97b0-463d53cd4486","Type":"ContainerDied","Data":"2e82710877c61cb96523d083e22aa92e507ecd439857299b15fbf3fe06e6e1c5"}
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.501672 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj"
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.634410 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-1\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.634487 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-combined-ca-bundle\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.634612 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ssh-key\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.634661 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-inventory\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.634681 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-0\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.634725 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzl98\" (UniqueName: \"kubernetes.io/projected/05b27132-4980-4bfa-97b0-463d53cd4486-kube-api-access-wzl98\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.634836 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-1\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.635012 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-0\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.635080 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-0\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.635108 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-1\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.635128 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ceph\") pod \"05b27132-4980-4bfa-97b0-463d53cd4486\" (UID: \"05b27132-4980-4bfa-97b0-463d53cd4486\") "
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.656168 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b27132-4980-4bfa-97b0-463d53cd4486-kube-api-access-wzl98" (OuterVolumeSpecName: "kube-api-access-wzl98") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "kube-api-access-wzl98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.657410 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ceph" (OuterVolumeSpecName: "ceph") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.660360 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.673417 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.673936 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.674563 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.682610 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-inventory" (OuterVolumeSpecName: "inventory") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.689212 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.691061 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-1" (OuterVolumeSpecName: "nova-cells-global-config-1") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "nova-cells-global-config-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.694156 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.703140 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "05b27132-4980-4bfa-97b0-463d53cd4486" (UID: "05b27132-4980-4bfa-97b0-463d53cd4486"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.747924 5028 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.747984 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.747998 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-1\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748010 5028 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ceph\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748021 5028 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748033 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748047 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748090 5028 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-inventory\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748103 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748115 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzl98\" (UniqueName: \"kubernetes.io/projected/05b27132-4980-4bfa-97b0-463d53cd4486-kube-api-access-wzl98\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.748125 5028 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/05b27132-4980-4bfa-97b0-463d53cd4486-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.985440 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj" event={"ID":"05b27132-4980-4bfa-97b0-463d53cd4486","Type":"ContainerDied","Data":"fcb41f8e81e892c730e02254d145a133c2953f4516cfe2b4d8bc64d10a084984"} Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.985493 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcb41f8e81e892c730e02254d145a133c2953f4516cfe2b4d8bc64d10a084984" Nov 23 09:43:01 crc kubenswrapper[5028]: I1123 09:43:01.985701 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj" Nov 23 09:43:12 crc kubenswrapper[5028]: I1123 09:43:12.053887 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:43:12 crc kubenswrapper[5028]: E1123 09:43:12.054848 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:43:26 crc kubenswrapper[5028]: I1123 09:43:26.053852 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:43:26 crc kubenswrapper[5028]: E1123 09:43:26.054589 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:43:40 crc kubenswrapper[5028]: I1123 09:43:40.054063 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:43:40 crc kubenswrapper[5028]: E1123 09:43:40.055047 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.053310 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:43:54 crc kubenswrapper[5028]: E1123 09:43:54.055441 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.098136 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xn8nv"] Nov 23 09:43:54 crc kubenswrapper[5028]: E1123 09:43:54.098731 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="extract-utilities" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.098752 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="extract-utilities" Nov 23 09:43:54 crc kubenswrapper[5028]: E1123 09:43:54.098784 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="extract-content" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.098792 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="extract-content" Nov 23 09:43:54 crc kubenswrapper[5028]: E1123 09:43:54.098818 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b27132-4980-4bfa-97b0-463d53cd4486" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.098826 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b27132-4980-4bfa-97b0-463d53cd4486" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 23 09:43:54 crc kubenswrapper[5028]: E1123 09:43:54.098855 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="registry-server" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.098861 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="registry-server" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.099113 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b27132-4980-4bfa-97b0-463d53cd4486" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.099133 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d270b8-1cad-47e1-8dea-9ae0f41e3bad" containerName="registry-server" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.152523 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xn8nv"] Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.152698 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.258613 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-catalog-content\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.258983 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-utilities\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.259507 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhf6\" (UniqueName: \"kubernetes.io/projected/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-kube-api-access-5nhf6\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.361975 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhf6\" (UniqueName: \"kubernetes.io/projected/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-kube-api-access-5nhf6\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.362118 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-catalog-content\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.362195 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-utilities\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.362811 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-utilities\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.363125 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-catalog-content\") pod \"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.386032 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhf6\" (UniqueName: \"kubernetes.io/projected/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-kube-api-access-5nhf6\") pod 
\"certified-operators-xn8nv\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:54 crc kubenswrapper[5028]: I1123 09:43:54.484031 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:43:55 crc kubenswrapper[5028]: I1123 09:43:55.141159 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xn8nv"] Nov 23 09:43:55 crc kubenswrapper[5028]: W1123 09:43:55.148750 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2e6cf47_4f6f_4e67_bbc5_317d4673aa2d.slice/crio-0e6f1f9e2840841ee654cd5513c3b02292e44ba6c15e40a8313af67371ccb7d6 WatchSource:0}: Error finding container 0e6f1f9e2840841ee654cd5513c3b02292e44ba6c15e40a8313af67371ccb7d6: Status 404 returned error can't find the container with id 0e6f1f9e2840841ee654cd5513c3b02292e44ba6c15e40a8313af67371ccb7d6 Nov 23 09:43:55 crc kubenswrapper[5028]: I1123 09:43:55.752832 5028 generic.go:334] "Generic (PLEG): container finished" podID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerID="39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8" exitCode=0 Nov 23 09:43:55 crc kubenswrapper[5028]: I1123 09:43:55.752998 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn8nv" event={"ID":"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d","Type":"ContainerDied","Data":"39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8"} Nov 23 09:43:55 crc kubenswrapper[5028]: I1123 09:43:55.753380 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn8nv" event={"ID":"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d","Type":"ContainerStarted","Data":"0e6f1f9e2840841ee654cd5513c3b02292e44ba6c15e40a8313af67371ccb7d6"} Nov 23 09:43:55 crc kubenswrapper[5028]: I1123 09:43:55.757109 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:43:56 crc kubenswrapper[5028]: I1123 09:43:56.776318 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn8nv" event={"ID":"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d","Type":"ContainerStarted","Data":"33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5"} Nov 23 09:43:57 crc kubenswrapper[5028]: I1123 09:43:57.791687 5028 generic.go:334] "Generic (PLEG): container finished" podID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerID="33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5" exitCode=0 Nov 23 09:43:57 crc kubenswrapper[5028]: I1123 09:43:57.791806 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn8nv" event={"ID":"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d","Type":"ContainerDied","Data":"33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5"} Nov 23 09:43:58 crc kubenswrapper[5028]: I1123 09:43:58.810148 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn8nv" event={"ID":"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d","Type":"ContainerStarted","Data":"f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6"} Nov 23 09:43:58 crc kubenswrapper[5028]: I1123 09:43:58.839410 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xn8nv" podStartSLOduration=2.114377461 
podStartE2EDuration="4.839385234s" podCreationTimestamp="2025-11-23 09:43:54 +0000 UTC" firstStartedPulling="2025-11-23 09:43:55.756787973 +0000 UTC m=+10419.454192752" lastFinishedPulling="2025-11-23 09:43:58.481795736 +0000 UTC m=+10422.179200525" observedRunningTime="2025-11-23 09:43:58.836454291 +0000 UTC m=+10422.533859080" watchObservedRunningTime="2025-11-23 09:43:58.839385234 +0000 UTC m=+10422.536790013" Nov 23 09:44:04 crc kubenswrapper[5028]: I1123 09:44:04.484234 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:44:04 crc kubenswrapper[5028]: I1123 09:44:04.485159 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:44:04 crc kubenswrapper[5028]: I1123 09:44:04.571867 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:44:05 crc kubenswrapper[5028]: I1123 09:44:05.054100 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:44:05 crc kubenswrapper[5028]: E1123 09:44:05.054737 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:44:05 crc kubenswrapper[5028]: I1123 09:44:05.531164 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:44:05 crc kubenswrapper[5028]: I1123 09:44:05.611094 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xn8nv"] Nov 23 09:44:06 crc kubenswrapper[5028]: I1123 09:44:06.951438 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xn8nv" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="registry-server" containerID="cri-o://f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6" gracePeriod=2 Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.565228 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.655332 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-catalog-content\") pod \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.655680 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nhf6\" (UniqueName: \"kubernetes.io/projected/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-kube-api-access-5nhf6\") pod \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.655766 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-utilities\") pod \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\" (UID: \"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d\") " Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.656693 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-utilities" (OuterVolumeSpecName: "utilities") pod "f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" (UID: "f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.657267 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.663184 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-kube-api-access-5nhf6" (OuterVolumeSpecName: "kube-api-access-5nhf6") pod "f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" (UID: "f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d"). InnerVolumeSpecName "kube-api-access-5nhf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.725129 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" (UID: "f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.759491 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nhf6\" (UniqueName: \"kubernetes.io/projected/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-kube-api-access-5nhf6\") on node \"crc\" DevicePath \"\"" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.759543 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.970596 5028 generic.go:334] "Generic (PLEG): container finished" podID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerID="f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6" exitCode=0 Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.970685 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn8nv" event={"ID":"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d","Type":"ContainerDied","Data":"f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6"} Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.970739 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xn8nv" event={"ID":"f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d","Type":"ContainerDied","Data":"0e6f1f9e2840841ee654cd5513c3b02292e44ba6c15e40a8313af67371ccb7d6"} Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.970804 5028 scope.go:117] "RemoveContainer" containerID="f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6" Nov 23 09:44:07 crc kubenswrapper[5028]: I1123 09:44:07.971196 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xn8nv" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.017656 5028 scope.go:117] "RemoveContainer" containerID="33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.055460 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xn8nv"] Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.067040 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xn8nv"] Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.699905 5028 scope.go:117] "RemoveContainer" containerID="39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.767898 5028 scope.go:117] "RemoveContainer" containerID="f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6" Nov 23 09:44:08 crc kubenswrapper[5028]: E1123 09:44:08.768670 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6\": container with ID starting with f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6 not found: ID does not exist" containerID="f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.768720 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6"} err="failed to get container status \"f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6\": rpc error: code = NotFound desc = could not find container \"f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6\": container with ID starting with f809b10dfc7cfdbcf8f841cdf67550b5dd2432ef4cb5b4c4c686c3655553a4c6 not found: ID does not exist" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.768761 5028 scope.go:117] "RemoveContainer" containerID="33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5" Nov 23 09:44:08 crc kubenswrapper[5028]: E1123 09:44:08.769134 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5\": container with ID starting with 33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5 not found: ID does not exist" containerID="33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.769167 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5"} err="failed to get container status \"33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5\": rpc error: code = NotFound desc = could not find container \"33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5\": container with ID starting with 33d00c4c98463b6090926617465b2bf1c531d5dfab71a641fd49a757647d85d5 not found: ID does not exist" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.769187 5028 scope.go:117] "RemoveContainer" containerID="39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8" Nov 23 09:44:08 crc kubenswrapper[5028]: E1123 09:44:08.769697 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8\": container with ID starting with 39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8 not found: ID does not exist" containerID="39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8" Nov 23 09:44:08 crc kubenswrapper[5028]: I1123 09:44:08.769779 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8"} err="failed to get container status \"39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8\": rpc error: code = NotFound desc = could not find container \"39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8\": container with ID starting with 39ee5eca0f860520842a28de0dcc8e64d24236be60ec02b1e79643aabe4afea8 not found: ID does not exist" Nov 23 09:44:09 crc kubenswrapper[5028]: I1123 09:44:09.071809 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" path="/var/lib/kubelet/pods/f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d/volumes" Nov 23 09:44:17 crc kubenswrapper[5028]: E1123 09:44:17.218384 5028 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:37926->38.102.83.145:39767: write tcp 38.102.83.145:37926->38.102.83.145:39767: write: broken pipe Nov 23 09:44:19 crc kubenswrapper[5028]: I1123 09:44:19.054529 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:44:19 crc kubenswrapper[5028]: E1123 09:44:19.055324 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:44:33 crc kubenswrapper[5028]: I1123 09:44:33.053917 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:44:33 crc kubenswrapper[5028]: E1123 09:44:33.055184 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:44:39 crc kubenswrapper[5028]: I1123 09:44:39.102007 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 09:44:39 crc kubenswrapper[5028]: I1123 09:44:39.103427 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="db7dc982-3c93-4b4a-a2f0-f74c5509fd77" containerName="adoption" containerID="cri-o://67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555" gracePeriod=30 Nov 23 09:44:47 crc kubenswrapper[5028]: I1123 09:44:47.076242 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:44:47 crc kubenswrapper[5028]: E1123 09:44:47.078114 5028 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.622000 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dtvzq"] Nov 23 09:44:50 crc kubenswrapper[5028]: E1123 09:44:50.625155 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="extract-utilities" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.625486 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="extract-utilities" Nov 23 09:44:50 crc kubenswrapper[5028]: E1123 09:44:50.625583 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="extract-content" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.625661 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="extract-content" Nov 23 09:44:50 crc kubenswrapper[5028]: E1123 09:44:50.625786 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="registry-server" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.625871 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="registry-server" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.626282 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e6cf47-4f6f-4e67-bbc5-317d4673aa2d" containerName="registry-server" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.628766 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.667759 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtvzq"] Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.669895 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8z5z\" (UniqueName: \"kubernetes.io/projected/63daeb44-4577-464d-aeea-b3d86a21845f-kube-api-access-v8z5z\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.670180 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-utilities\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.670360 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-catalog-content\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.773219 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8z5z\" (UniqueName: \"kubernetes.io/projected/63daeb44-4577-464d-aeea-b3d86a21845f-kube-api-access-v8z5z\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.773287 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-utilities\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.773341 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-catalog-content\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.773876 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-catalog-content\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.774118 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-utilities\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.801930 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-v8z5z\" (UniqueName: \"kubernetes.io/projected/63daeb44-4577-464d-aeea-b3d86a21845f-kube-api-access-v8z5z\") pod \"redhat-marketplace-dtvzq\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") " pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:50 crc kubenswrapper[5028]: I1123 09:44:50.972223 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtvzq" Nov 23 09:44:51 crc kubenswrapper[5028]: I1123 09:44:51.510405 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtvzq"] Nov 23 09:44:51 crc kubenswrapper[5028]: W1123 09:44:51.524299 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63daeb44_4577_464d_aeea_b3d86a21845f.slice/crio-882943baf4ef2943711c99dc77a7f33ec69515e318205b12fa3bba46c619d58f WatchSource:0}: Error finding container 882943baf4ef2943711c99dc77a7f33ec69515e318205b12fa3bba46c619d58f: Status 404 returned error can't find the container with id 882943baf4ef2943711c99dc77a7f33ec69515e318205b12fa3bba46c619d58f Nov 23 09:44:51 crc kubenswrapper[5028]: I1123 09:44:51.700094 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtvzq" event={"ID":"63daeb44-4577-464d-aeea-b3d86a21845f","Type":"ContainerStarted","Data":"882943baf4ef2943711c99dc77a7f33ec69515e318205b12fa3bba46c619d58f"} Nov 23 09:44:52 crc kubenswrapper[5028]: I1123 09:44:52.716621 5028 generic.go:334] "Generic (PLEG): container finished" podID="63daeb44-4577-464d-aeea-b3d86a21845f" containerID="d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c" exitCode=0 Nov 23 09:44:52 crc kubenswrapper[5028]: I1123 09:44:52.716716 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtvzq" event={"ID":"63daeb44-4577-464d-aeea-b3d86a21845f","Type":"ContainerDied","Data":"d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c"} Nov 23 09:44:53 crc kubenswrapper[5028]: I1123 09:44:53.734791 5028 generic.go:334] "Generic (PLEG): container finished" podID="63daeb44-4577-464d-aeea-b3d86a21845f" containerID="cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932" exitCode=0 Nov 23 09:44:53 crc kubenswrapper[5028]: I1123 09:44:53.734891 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtvzq" event={"ID":"63daeb44-4577-464d-aeea-b3d86a21845f","Type":"ContainerDied","Data":"cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932"} Nov 23 09:44:54 crc kubenswrapper[5028]: I1123 09:44:54.750375 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtvzq" event={"ID":"63daeb44-4577-464d-aeea-b3d86a21845f","Type":"ContainerStarted","Data":"cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9"} Nov 23 09:44:54 crc kubenswrapper[5028]: I1123 09:44:54.781278 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dtvzq" podStartSLOduration=3.358205758 podStartE2EDuration="4.781245005s" podCreationTimestamp="2025-11-23 09:44:50 +0000 UTC" firstStartedPulling="2025-11-23 09:44:52.720587533 +0000 UTC m=+10476.417992322" lastFinishedPulling="2025-11-23 09:44:54.14362678 +0000 UTC m=+10477.841031569" observedRunningTime="2025-11-23 09:44:54.779082971 +0000 UTC m=+10478.476487750" 
watchObservedRunningTime="2025-11-23 09:44:54.781245005 +0000 UTC m=+10478.478649784" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.058186 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:45:00 crc kubenswrapper[5028]: E1123 09:45:00.059707 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.223786 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"] Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.231644 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.236121 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.236478 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.245549 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"] Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.374164 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbcb606b-4975-444e-b3a8-9d37305d2bf4-secret-volume\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.374427 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbcb606b-4975-444e-b3a8-9d37305d2bf4-config-volume\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.374715 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q9mj\" (UniqueName: \"kubernetes.io/projected/dbcb606b-4975-444e-b3a8-9d37305d2bf4-kube-api-access-6q9mj\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.477389 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbcb606b-4975-444e-b3a8-9d37305d2bf4-config-volume\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" Nov 23 09:45:00 crc kubenswrapper[5028]: 
I1123 09:45:00.477844 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q9mj\" (UniqueName: \"kubernetes.io/projected/dbcb606b-4975-444e-b3a8-9d37305d2bf4-kube-api-access-6q9mj\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.478016 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbcb606b-4975-444e-b3a8-9d37305d2bf4-secret-volume\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.479222 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbcb606b-4975-444e-b3a8-9d37305d2bf4-config-volume\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.487581 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbcb606b-4975-444e-b3a8-9d37305d2bf4-secret-volume\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.507145 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q9mj\" (UniqueName: \"kubernetes.io/projected/dbcb606b-4975-444e-b3a8-9d37305d2bf4-kube-api-access-6q9mj\") pod \"collect-profiles-29398185-bf2zk\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.557574 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.976713 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dtvzq"
Nov 23 09:45:00 crc kubenswrapper[5028]: I1123 09:45:00.977300 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dtvzq"
Nov 23 09:45:01 crc kubenswrapper[5028]: I1123 09:45:01.085647 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dtvzq"
Nov 23 09:45:01 crc kubenswrapper[5028]: W1123 09:45:01.098585 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbcb606b_4975_444e_b3a8_9d37305d2bf4.slice/crio-7ab16b409bf2f2a9ce8aef2f37cf753cbf4c38927b97118de1ff56669116046e WatchSource:0}: Error finding container 7ab16b409bf2f2a9ce8aef2f37cf753cbf4c38927b97118de1ff56669116046e: Status 404 returned error can't find the container with id 7ab16b409bf2f2a9ce8aef2f37cf753cbf4c38927b97118de1ff56669116046e
Nov 23 09:45:01 crc kubenswrapper[5028]: I1123 09:45:01.117206 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"]
Nov 23 09:45:01 crc kubenswrapper[5028]: I1123 09:45:01.854580 5028 generic.go:334] "Generic (PLEG): container finished" podID="dbcb606b-4975-444e-b3a8-9d37305d2bf4" containerID="c9e7ac748e82067a3235f35ea2e5e695f10d6613f4a341237327d13bce27527f" exitCode=0
Nov 23 09:45:01 crc kubenswrapper[5028]: I1123 09:45:01.854660 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" event={"ID":"dbcb606b-4975-444e-b3a8-9d37305d2bf4","Type":"ContainerDied","Data":"c9e7ac748e82067a3235f35ea2e5e695f10d6613f4a341237327d13bce27527f"}
Nov 23 09:45:01 crc kubenswrapper[5028]: I1123 09:45:01.854896 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" event={"ID":"dbcb606b-4975-444e-b3a8-9d37305d2bf4","Type":"ContainerStarted","Data":"7ab16b409bf2f2a9ce8aef2f37cf753cbf4c38927b97118de1ff56669116046e"}
Nov 23 09:45:01 crc kubenswrapper[5028]: I1123 09:45:01.947576 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dtvzq"
Nov 23 09:45:02 crc kubenswrapper[5028]: I1123 09:45:02.046012 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtvzq"]
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.712132 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.808091 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbcb606b-4975-444e-b3a8-9d37305d2bf4-config-volume\") pod \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") "
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.808357 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbcb606b-4975-444e-b3a8-9d37305d2bf4-secret-volume\") pod \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") "
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.808503 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q9mj\" (UniqueName: \"kubernetes.io/projected/dbcb606b-4975-444e-b3a8-9d37305d2bf4-kube-api-access-6q9mj\") pod \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\" (UID: \"dbcb606b-4975-444e-b3a8-9d37305d2bf4\") "
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.809656 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcb606b-4975-444e-b3a8-9d37305d2bf4-config-volume" (OuterVolumeSpecName: "config-volume") pod "dbcb606b-4975-444e-b3a8-9d37305d2bf4" (UID: "dbcb606b-4975-444e-b3a8-9d37305d2bf4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.852253 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcb606b-4975-444e-b3a8-9d37305d2bf4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dbcb606b-4975-444e-b3a8-9d37305d2bf4" (UID: "dbcb606b-4975-444e-b3a8-9d37305d2bf4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.852649 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcb606b-4975-444e-b3a8-9d37305d2bf4-kube-api-access-6q9mj" (OuterVolumeSpecName: "kube-api-access-6q9mj") pod "dbcb606b-4975-444e-b3a8-9d37305d2bf4" (UID: "dbcb606b-4975-444e-b3a8-9d37305d2bf4"). InnerVolumeSpecName "kube-api-access-6q9mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.881914 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dtvzq" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="registry-server" containerID="cri-o://cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9" gracePeriod=2
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.882312 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk"
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.890198 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398185-bf2zk" event={"ID":"dbcb606b-4975-444e-b3a8-9d37305d2bf4","Type":"ContainerDied","Data":"7ab16b409bf2f2a9ce8aef2f37cf753cbf4c38927b97118de1ff56669116046e"}
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.890233 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ab16b409bf2f2a9ce8aef2f37cf753cbf4c38927b97118de1ff56669116046e"
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.911326 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbcb606b-4975-444e-b3a8-9d37305d2bf4-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.911353 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q9mj\" (UniqueName: \"kubernetes.io/projected/dbcb606b-4975-444e-b3a8-9d37305d2bf4-kube-api-access-6q9mj\") on node \"crc\" DevicePath \"\""
Nov 23 09:45:03 crc kubenswrapper[5028]: I1123 09:45:03.911364 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbcb606b-4975-444e-b3a8-9d37305d2bf4-config-volume\") on node \"crc\" DevicePath \"\""
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.333632 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtvzq"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.422907 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8z5z\" (UniqueName: \"kubernetes.io/projected/63daeb44-4577-464d-aeea-b3d86a21845f-kube-api-access-v8z5z\") pod \"63daeb44-4577-464d-aeea-b3d86a21845f\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") "
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.422993 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-catalog-content\") pod \"63daeb44-4577-464d-aeea-b3d86a21845f\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") "
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.423296 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-utilities\") pod \"63daeb44-4577-464d-aeea-b3d86a21845f\" (UID: \"63daeb44-4577-464d-aeea-b3d86a21845f\") "
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.425071 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-utilities" (OuterVolumeSpecName: "utilities") pod "63daeb44-4577-464d-aeea-b3d86a21845f" (UID: "63daeb44-4577-464d-aeea-b3d86a21845f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.430306 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63daeb44-4577-464d-aeea-b3d86a21845f-kube-api-access-v8z5z" (OuterVolumeSpecName: "kube-api-access-v8z5z") pod "63daeb44-4577-464d-aeea-b3d86a21845f" (UID: "63daeb44-4577-464d-aeea-b3d86a21845f"). InnerVolumeSpecName "kube-api-access-v8z5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.443321 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63daeb44-4577-464d-aeea-b3d86a21845f" (UID: "63daeb44-4577-464d-aeea-b3d86a21845f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.526738 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.526771 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8z5z\" (UniqueName: \"kubernetes.io/projected/63daeb44-4577-464d-aeea-b3d86a21845f-kube-api-access-v8z5z\") on node \"crc\" DevicePath \"\""
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.526781 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63daeb44-4577-464d-aeea-b3d86a21845f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.818249 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"]
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.830223 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398140-7kbtd"]
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.895117 5028 generic.go:334] "Generic (PLEG): container finished" podID="63daeb44-4577-464d-aeea-b3d86a21845f" containerID="cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9" exitCode=0
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.895181 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtvzq" event={"ID":"63daeb44-4577-464d-aeea-b3d86a21845f","Type":"ContainerDied","Data":"cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9"}
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.895212 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dtvzq" event={"ID":"63daeb44-4577-464d-aeea-b3d86a21845f","Type":"ContainerDied","Data":"882943baf4ef2943711c99dc77a7f33ec69515e318205b12fa3bba46c619d58f"}
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.895234 5028 scope.go:117] "RemoveContainer" containerID="cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.895380 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dtvzq"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.918009 5028 scope.go:117] "RemoveContainer" containerID="cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.943591 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtvzq"]
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.956163 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dtvzq"]
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.961898 5028 scope.go:117] "RemoveContainer" containerID="d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.998209 5028 scope.go:117] "RemoveContainer" containerID="cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9"
Nov 23 09:45:04 crc kubenswrapper[5028]: E1123 09:45:04.999051 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9\": container with ID starting with cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9 not found: ID does not exist" containerID="cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.999136 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9"} err="failed to get container status \"cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9\": rpc error: code = NotFound desc = could not find container \"cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9\": container with ID starting with cea763ecffae1941bd6510b6edd5ba883a0ba478dc2626bcf8f205acc7231af9 not found: ID does not exist"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.999186 5028 scope.go:117] "RemoveContainer" containerID="cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932"
Nov 23 09:45:04 crc kubenswrapper[5028]: E1123 09:45:04.999594 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932\": container with ID starting with cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932 not found: ID does not exist" containerID="cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.999696 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932"} err="failed to get container status \"cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932\": rpc error: code = NotFound desc = could not find container \"cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932\": container with ID starting with cfd26c079585c8f9fc7c50d202aef4d5058d8e3aa3b7b6bfc120e1060c0b2932 not found: ID does not exist"
Nov 23 09:45:04 crc kubenswrapper[5028]: I1123 09:45:04.999783 5028 scope.go:117] "RemoveContainer" containerID="d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c"
Nov 23 09:45:05 crc kubenswrapper[5028]: E1123 09:45:05.000205 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c\": container with ID starting with d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c not found: ID does not exist" containerID="d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c"
Nov 23 09:45:05 crc kubenswrapper[5028]: I1123 09:45:05.000282 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c"} err="failed to get container status \"d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c\": rpc error: code = NotFound desc = could not find container \"d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c\": container with ID starting with d8e582cbf2d6e3211d17d3f6fdf9fbcfe3b9ce9fc4979f2c1b2a5e0edc887c0c not found: ID does not exist"
Nov 23 09:45:05 crc kubenswrapper[5028]: I1123 09:45:05.071929 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c902c23-4d0e-451c-809d-d26e6ce797fe" path="/var/lib/kubelet/pods/3c902c23-4d0e-451c-809d-d26e6ce797fe/volumes"
Nov 23 09:45:05 crc kubenswrapper[5028]: I1123 09:45:05.072882 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" path="/var/lib/kubelet/pods/63daeb44-4577-464d-aeea-b3d86a21845f/volumes"
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.882741 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ck9h\" (UniqueName: \"kubernetes.io/projected/db7dc982-3c93-4b4a-a2f0-f74c5509fd77-kube-api-access-5ck9h\") on node \"crc\" DevicePath \"\"" Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.882858 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-096c63b8-41eb-4c75-9660-004bbe65182f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-096c63b8-41eb-4c75-9660-004bbe65182f\") on node \"crc\" " Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.912967 5028 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.914615 5028 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-096c63b8-41eb-4c75-9660-004bbe65182f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-096c63b8-41eb-4c75-9660-004bbe65182f") on node "crc" Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.983501 5028 generic.go:334] "Generic (PLEG): container finished" podID="db7dc982-3c93-4b4a-a2f0-f74c5509fd77" containerID="67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555" exitCode=137 Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.983593 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"db7dc982-3c93-4b4a-a2f0-f74c5509fd77","Type":"ContainerDied","Data":"67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555"} Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.984407 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"db7dc982-3c93-4b4a-a2f0-f74c5509fd77","Type":"ContainerDied","Data":"126fcafad558672dae76cacba097b694268ac3cce46fba63f720685679c200c5"} Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.983626 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.984438 5028 scope.go:117] "RemoveContainer" containerID="67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555" Nov 23 09:45:09 crc kubenswrapper[5028]: I1123 09:45:09.986263 5028 reconciler_common.go:293] "Volume detached for volume \"pvc-096c63b8-41eb-4c75-9660-004bbe65182f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-096c63b8-41eb-4c75-9660-004bbe65182f\") on node \"crc\" DevicePath \"\"" Nov 23 09:45:10 crc kubenswrapper[5028]: I1123 09:45:10.035797 5028 scope.go:117] "RemoveContainer" containerID="67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555" Nov 23 09:45:10 crc kubenswrapper[5028]: E1123 09:45:10.039052 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555\": container with ID starting with 67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555 not found: ID does not exist" containerID="67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555" Nov 23 09:45:10 crc kubenswrapper[5028]: I1123 09:45:10.039526 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555"} err="failed to get container status \"67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555\": rpc error: code = NotFound desc = could not find container \"67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555\": container with ID starting with 67bd47a777258f6c1c5e637bfee46cfd36f57b9edbc59cbd4ad4eb15d4a7d555 not found: ID does not exist" Nov 23 09:45:10 crc kubenswrapper[5028]: I1123 09:45:10.039095 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 09:45:10 crc kubenswrapper[5028]: I1123 09:45:10.050061 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Nov 23 09:45:10 crc kubenswrapper[5028]: I1123 09:45:10.811878 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 09:45:10 crc kubenswrapper[5028]: I1123 09:45:10.812795 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="11216e20-1103-4fa8-b4fb-df9556d9114b" containerName="adoption" containerID="cri-o://61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198" gracePeriod=30 Nov 23 09:45:11 crc kubenswrapper[5028]: I1123 09:45:11.054303 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:45:11 crc kubenswrapper[5028]: E1123 09:45:11.054714 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:45:11 crc kubenswrapper[5028]: I1123 09:45:11.074832 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7dc982-3c93-4b4a-a2f0-f74c5509fd77" path="/var/lib/kubelet/pods/db7dc982-3c93-4b4a-a2f0-f74c5509fd77/volumes" Nov 23 09:45:25 crc kubenswrapper[5028]: I1123 
09:45:25.053249 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:45:25 crc kubenswrapper[5028]: E1123 09:45:25.055108 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:45:33 crc kubenswrapper[5028]: I1123 09:45:33.654278 5028 scope.go:117] "RemoveContainer" containerID="f7a46fe3d6c0c875fa496bafab5494c80f159b333ae1cc5a254ab325e058aac8" Nov 23 09:45:41 crc kubenswrapper[5028]: I1123 09:45:41.054044 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:45:41 crc kubenswrapper[5028]: E1123 09:45:41.055168 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.211391 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.351624 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/11216e20-1103-4fa8-b4fb-df9556d9114b-ovn-data-cert\") pod \"11216e20-1103-4fa8-b4fb-df9556d9114b\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.354133 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\") pod \"11216e20-1103-4fa8-b4fb-df9556d9114b\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.354382 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvp6f\" (UniqueName: \"kubernetes.io/projected/11216e20-1103-4fa8-b4fb-df9556d9114b-kube-api-access-hvp6f\") pod \"11216e20-1103-4fa8-b4fb-df9556d9114b\" (UID: \"11216e20-1103-4fa8-b4fb-df9556d9114b\") " Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.360706 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11216e20-1103-4fa8-b4fb-df9556d9114b-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "11216e20-1103-4fa8-b4fb-df9556d9114b" (UID: "11216e20-1103-4fa8-b4fb-df9556d9114b"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.361633 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11216e20-1103-4fa8-b4fb-df9556d9114b-kube-api-access-hvp6f" (OuterVolumeSpecName: "kube-api-access-hvp6f") pod "11216e20-1103-4fa8-b4fb-df9556d9114b" (UID: "11216e20-1103-4fa8-b4fb-df9556d9114b"). 
InnerVolumeSpecName "kube-api-access-hvp6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.376994 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1" (OuterVolumeSpecName: "ovn-data") pod "11216e20-1103-4fa8-b4fb-df9556d9114b" (UID: "11216e20-1103-4fa8-b4fb-df9556d9114b"). InnerVolumeSpecName "pvc-3289866c-65d3-49d2-b802-8c66c5d523d1". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.457329 5028 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/11216e20-1103-4fa8-b4fb-df9556d9114b-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.457607 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\") on node \"crc\" " Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.457716 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvp6f\" (UniqueName: \"kubernetes.io/projected/11216e20-1103-4fa8-b4fb-df9556d9114b-kube-api-access-hvp6f\") on node \"crc\" DevicePath \"\"" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.491495 5028 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.491830 5028 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3289866c-65d3-49d2-b802-8c66c5d523d1" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1") on node "crc" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.540472 5028 generic.go:334] "Generic (PLEG): container finished" podID="11216e20-1103-4fa8-b4fb-df9556d9114b" containerID="61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198" exitCode=137 Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.540555 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"11216e20-1103-4fa8-b4fb-df9556d9114b","Type":"ContainerDied","Data":"61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198"} Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.540656 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"11216e20-1103-4fa8-b4fb-df9556d9114b","Type":"ContainerDied","Data":"aa318fc1e873714931c593ccfb5b35b87760de0cc849ac54e0cfb70bf058a48f"} Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.540690 5028 scope.go:117] "RemoveContainer" containerID="61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.540865 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.560356 5028 reconciler_common.go:293] "Volume detached for volume \"pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3289866c-65d3-49d2-b802-8c66c5d523d1\") on node \"crc\" DevicePath \"\"" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.582799 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.592617 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.592684 5028 scope.go:117] "RemoveContainer" containerID="61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198" Nov 23 09:45:42 crc kubenswrapper[5028]: E1123 09:45:42.593317 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198\": container with ID starting with 61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198 not found: ID does not exist" containerID="61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198" Nov 23 09:45:42 crc kubenswrapper[5028]: I1123 09:45:42.593351 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198"} err="failed to get container status \"61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198\": rpc error: code = NotFound desc = could not find container \"61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198\": container with ID starting with 61ce58d5b1ab3d68271d91b9addd3487bac016c9ceae779c6619ae2a3f393198 not found: ID does not exist" Nov 23 09:45:43 crc kubenswrapper[5028]: I1123 09:45:43.069019 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11216e20-1103-4fa8-b4fb-df9556d9114b" path="/var/lib/kubelet/pods/11216e20-1103-4fa8-b4fb-df9556d9114b/volumes" Nov 23 09:45:55 crc kubenswrapper[5028]: I1123 09:45:55.054310 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:45:55 crc kubenswrapper[5028]: E1123 09:45:55.057239 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:46:09 crc kubenswrapper[5028]: I1123 09:46:09.054070 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:46:09 crc kubenswrapper[5028]: E1123 09:46:09.055768 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:46:20 crc kubenswrapper[5028]: I1123 
09:46:20.054322 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:46:20 crc kubenswrapper[5028]: E1123 09:46:20.055381 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:46:35 crc kubenswrapper[5028]: I1123 09:46:35.054731 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:46:35 crc kubenswrapper[5028]: E1123 09:46:35.055801 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:46:49 crc kubenswrapper[5028]: I1123 09:46:49.054077 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:46:49 crc kubenswrapper[5028]: E1123 09:46:49.055904 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:47:00 crc kubenswrapper[5028]: I1123 09:47:00.054158 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:47:00 crc kubenswrapper[5028]: E1123 09:47:00.055172 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:47:02 crc kubenswrapper[5028]: I1123 09:47:02.161539 5028 trace.go:236] Trace[627483801]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (23-Nov-2025 09:47:01.088) (total time: 1073ms): Nov 23 09:47:02 crc kubenswrapper[5028]: Trace[627483801]: [1.073193453s] [1.073193453s] END Nov 23 09:47:15 crc kubenswrapper[5028]: I1123 09:47:15.053501 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:47:15 crc kubenswrapper[5028]: E1123 09:47:15.054430 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" 
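The machine-config-daemon-th92p entries from 09:45:11 through 09:47:15 are one continuous back-off cycle: each "RemoveContainer" sync attempt is immediately refused by pod_workers.go:1301 with CrashLoopBackOff under the logged 5m0s back-off cap. A rough way to tally such refusals per pod from a saved copy of this journal is sketched below; the filename is hypothetical, and the string slicing assumes only the pod="..." field visible in these entries:

    from collections import Counter

    def backoff_counts(path):
        # Count pod_workers.go:1301 CrashLoopBackOff refusals per pod.
        hits = Counter()
        with open(path) as f:
            for line in f:
                if 'with CrashLoopBackOff' in line:
                    pod = line.rsplit('pod="', 1)[-1].split('"', 1)[0]
                    hits[pod] += 1
        return hits

    print(backoff_counts('kubelet.log'))  # e.g. Counter({'openshift-machine-config-operator/machine-config-daemon-th92p': ...})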
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.373457 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-psjvl"] Nov 23 09:47:26 crc kubenswrapper[5028]: E1123 09:47:26.374711 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db7dc982-3c93-4b4a-a2f0-f74c5509fd77" containerName="adoption" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.374728 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="db7dc982-3c93-4b4a-a2f0-f74c5509fd77" containerName="adoption" Nov 23 09:47:26 crc kubenswrapper[5028]: E1123 09:47:26.374747 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11216e20-1103-4fa8-b4fb-df9556d9114b" containerName="adoption" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.374756 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="11216e20-1103-4fa8-b4fb-df9556d9114b" containerName="adoption" Nov 23 09:47:26 crc kubenswrapper[5028]: E1123 09:47:26.374777 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="extract-utilities" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.374786 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="extract-utilities" Nov 23 09:47:26 crc kubenswrapper[5028]: E1123 09:47:26.374822 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcb606b-4975-444e-b3a8-9d37305d2bf4" containerName="collect-profiles" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.374831 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcb606b-4975-444e-b3a8-9d37305d2bf4" containerName="collect-profiles" Nov 23 09:47:26 crc kubenswrapper[5028]: E1123 09:47:26.374847 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="registry-server" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.374856 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="registry-server" Nov 23 09:47:26 crc kubenswrapper[5028]: E1123 09:47:26.374882 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="extract-content" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.374891 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="extract-content" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.375266 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcb606b-4975-444e-b3a8-9d37305d2bf4" containerName="collect-profiles" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.375282 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="63daeb44-4577-464d-aeea-b3d86a21845f" containerName="registry-server" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.375298 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="11216e20-1103-4fa8-b4fb-df9556d9114b" containerName="adoption" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.375317 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="db7dc982-3c93-4b4a-a2f0-f74c5509fd77" containerName="adoption" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.377669 5028 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.384097 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psjvl"] Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.472775 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-utilities\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.472878 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28mcc\" (UniqueName: \"kubernetes.io/projected/95973902-bbab-4661-8f7e-a23332f57df5-kube-api-access-28mcc\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.472905 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-catalog-content\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.575518 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28mcc\" (UniqueName: \"kubernetes.io/projected/95973902-bbab-4661-8f7e-a23332f57df5-kube-api-access-28mcc\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.575570 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-catalog-content\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.575749 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-utilities\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.576165 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-catalog-content\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.576241 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-utilities\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.600910 5028 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28mcc\" (UniqueName: \"kubernetes.io/projected/95973902-bbab-4661-8f7e-a23332f57df5-kube-api-access-28mcc\") pod \"community-operators-psjvl\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:26 crc kubenswrapper[5028]: I1123 09:47:26.712494 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:27 crc kubenswrapper[5028]: I1123 09:47:27.266559 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-psjvl"] Nov 23 09:47:28 crc kubenswrapper[5028]: I1123 09:47:28.054610 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:47:28 crc kubenswrapper[5028]: E1123 09:47:28.055467 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:47:28 crc kubenswrapper[5028]: I1123 09:47:28.086172 5028 generic.go:334] "Generic (PLEG): container finished" podID="95973902-bbab-4661-8f7e-a23332f57df5" containerID="1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4" exitCode=0 Nov 23 09:47:28 crc kubenswrapper[5028]: I1123 09:47:28.086267 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psjvl" event={"ID":"95973902-bbab-4661-8f7e-a23332f57df5","Type":"ContainerDied","Data":"1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4"} Nov 23 09:47:28 crc kubenswrapper[5028]: I1123 09:47:28.086351 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psjvl" event={"ID":"95973902-bbab-4661-8f7e-a23332f57df5","Type":"ContainerStarted","Data":"d65b0eb1de8e1b6312992c344e67c77dfe6de6845f81db072578e93fc7f33abe"} Nov 23 09:47:29 crc kubenswrapper[5028]: I1123 09:47:29.112124 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psjvl" event={"ID":"95973902-bbab-4661-8f7e-a23332f57df5","Type":"ContainerStarted","Data":"37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c"} Nov 23 09:47:30 crc kubenswrapper[5028]: I1123 09:47:30.134575 5028 generic.go:334] "Generic (PLEG): container finished" podID="95973902-bbab-4661-8f7e-a23332f57df5" containerID="37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c" exitCode=0 Nov 23 09:47:30 crc kubenswrapper[5028]: I1123 09:47:30.134798 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psjvl" event={"ID":"95973902-bbab-4661-8f7e-a23332f57df5","Type":"ContainerDied","Data":"37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c"} Nov 23 09:47:32 crc kubenswrapper[5028]: I1123 09:47:32.167050 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psjvl" event={"ID":"95973902-bbab-4661-8f7e-a23332f57df5","Type":"ContainerStarted","Data":"7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b"} Nov 23 09:47:32 crc kubenswrapper[5028]: I1123 
09:47:32.205762 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-psjvl" podStartSLOduration=3.643007963 podStartE2EDuration="6.205731228s" podCreationTimestamp="2025-11-23 09:47:26 +0000 UTC" firstStartedPulling="2025-11-23 09:47:28.089157501 +0000 UTC m=+10631.786562280" lastFinishedPulling="2025-11-23 09:47:30.651880746 +0000 UTC m=+10634.349285545" observedRunningTime="2025-11-23 09:47:32.195984446 +0000 UTC m=+10635.893389295" watchObservedRunningTime="2025-11-23 09:47:32.205731228 +0000 UTC m=+10635.903136017" Nov 23 09:47:36 crc kubenswrapper[5028]: I1123 09:47:36.713912 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:36 crc kubenswrapper[5028]: I1123 09:47:36.714850 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:36 crc kubenswrapper[5028]: I1123 09:47:36.787434 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:37 crc kubenswrapper[5028]: I1123 09:47:37.330327 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:37 crc kubenswrapper[5028]: I1123 09:47:37.417134 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psjvl"] Nov 23 09:47:39 crc kubenswrapper[5028]: I1123 09:47:39.452194 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-psjvl" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="registry-server" containerID="cri-o://7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b" gracePeriod=2 Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.057089 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.237264 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28mcc\" (UniqueName: \"kubernetes.io/projected/95973902-bbab-4661-8f7e-a23332f57df5-kube-api-access-28mcc\") pod \"95973902-bbab-4661-8f7e-a23332f57df5\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.237685 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-utilities\") pod \"95973902-bbab-4661-8f7e-a23332f57df5\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.237780 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-catalog-content\") pod \"95973902-bbab-4661-8f7e-a23332f57df5\" (UID: \"95973902-bbab-4661-8f7e-a23332f57df5\") " Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.239087 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-utilities" (OuterVolumeSpecName: "utilities") pod "95973902-bbab-4661-8f7e-a23332f57df5" (UID: "95973902-bbab-4661-8f7e-a23332f57df5"). InnerVolumeSpecName "utilities". 
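The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (09:47:32.205731228 − 09:47:26 = 6.205731228s), and podStartSLOduration is that E2E figure minus the image-pull window taken from the monotonic m= offsets (10634.349285545 − 10631.786562280 = 2.562723265s). A quick check, using only numbers printed in the entry:

    # Re-derive the two durations reported for community-operators-psjvl.
    e2e = 32.205731228 - 26.0                  # watchObservedRunningTime - podCreationTimestamp, both within 09:47
    pull = 10634.349285545 - 10631.786562280   # lastFinishedPulling - firstStartedPulling (m= offsets)
    slo = e2e - pull
    print(round(e2e, 9), round(slo, 9))        # ~6.205731228 and ~3.643007963, matching the logged values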
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.267164 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95973902-bbab-4661-8f7e-a23332f57df5-kube-api-access-28mcc" (OuterVolumeSpecName: "kube-api-access-28mcc") pod "95973902-bbab-4661-8f7e-a23332f57df5" (UID: "95973902-bbab-4661-8f7e-a23332f57df5"). InnerVolumeSpecName "kube-api-access-28mcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.341252 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28mcc\" (UniqueName: \"kubernetes.io/projected/95973902-bbab-4661-8f7e-a23332f57df5-kube-api-access-28mcc\") on node \"crc\" DevicePath \"\"" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.341301 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.348367 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95973902-bbab-4661-8f7e-a23332f57df5" (UID: "95973902-bbab-4661-8f7e-a23332f57df5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.443666 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95973902-bbab-4661-8f7e-a23332f57df5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.486350 5028 generic.go:334] "Generic (PLEG): container finished" podID="95973902-bbab-4661-8f7e-a23332f57df5" containerID="7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b" exitCode=0 Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.486459 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psjvl" event={"ID":"95973902-bbab-4661-8f7e-a23332f57df5","Type":"ContainerDied","Data":"7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b"} Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.486513 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-psjvl" event={"ID":"95973902-bbab-4661-8f7e-a23332f57df5","Type":"ContainerDied","Data":"d65b0eb1de8e1b6312992c344e67c77dfe6de6845f81db072578e93fc7f33abe"} Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.486551 5028 scope.go:117] "RemoveContainer" containerID="7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.486562 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-psjvl" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.518168 5028 scope.go:117] "RemoveContainer" containerID="37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.543749 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-psjvl"] Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.555905 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-psjvl"] Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.574847 5028 scope.go:117] "RemoveContainer" containerID="1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.640593 5028 scope.go:117] "RemoveContainer" containerID="7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b" Nov 23 09:47:40 crc kubenswrapper[5028]: E1123 09:47:40.641541 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b\": container with ID starting with 7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b not found: ID does not exist" containerID="7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.641583 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b"} err="failed to get container status \"7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b\": rpc error: code = NotFound desc = could not find container \"7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b\": container with ID starting with 7a3aced3c0f8610c142bd98e8d1392463e147eb3e05282052d31b7e9d2d0b16b not found: ID does not exist" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.641613 5028 scope.go:117] "RemoveContainer" containerID="37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c" Nov 23 09:47:40 crc kubenswrapper[5028]: E1123 09:47:40.641991 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c\": container with ID starting with 37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c not found: ID does not exist" containerID="37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.642126 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c"} err="failed to get container status \"37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c\": rpc error: code = NotFound desc = could not find container \"37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c\": container with ID starting with 37ae78191e28eaf7074a8f571a0bc1ea380d8c1f1dc5fb883140fc2e9cc6870c not found: ID does not exist" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.642228 5028 scope.go:117] "RemoveContainer" containerID="1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4" Nov 23 09:47:40 crc kubenswrapper[5028]: E1123 09:47:40.642665 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4\": container with ID starting with 1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4 not found: ID does not exist" containerID="1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4" Nov 23 09:47:40 crc kubenswrapper[5028]: I1123 09:47:40.642720 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4"} err="failed to get container status \"1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4\": rpc error: code = NotFound desc = could not find container \"1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4\": container with ID starting with 1f4e7b94a5212eca30325b3d2807551ca8248c58d820bde29fa6ae64a377c8f4 not found: ID does not exist" Nov 23 09:47:41 crc kubenswrapper[5028]: I1123 09:47:41.080001 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95973902-bbab-4661-8f7e-a23332f57df5" path="/var/lib/kubelet/pods/95973902-bbab-4661-8f7e-a23332f57df5/volumes" Nov 23 09:47:42 crc kubenswrapper[5028]: I1123 09:47:42.053799 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:47:42 crc kubenswrapper[5028]: I1123 09:47:42.525555 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"31d553352240adb2b582a119d8de995a096f47dd6548fe5aa1f32a6f0107990d"} Nov 23 09:50:00 crc kubenswrapper[5028]: I1123 09:50:00.947422 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:50:00 crc kubenswrapper[5028]: I1123 09:50:00.948032 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:50:30 crc kubenswrapper[5028]: I1123 09:50:30.946540 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 09:50:30 crc kubenswrapper[5028]: I1123 09:50:30.947438 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:51:00 crc kubenswrapper[5028]: I1123 09:51:00.947097 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 23 09:51:00 crc kubenswrapper[5028]: I1123 09:51:00.947866 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 09:51:00 crc kubenswrapper[5028]: I1123 09:51:00.947973 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 09:51:00 crc kubenswrapper[5028]: I1123 09:51:00.949395 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31d553352240adb2b582a119d8de995a096f47dd6548fe5aa1f32a6f0107990d"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 09:51:00 crc kubenswrapper[5028]: I1123 09:51:00.949526 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://31d553352240adb2b582a119d8de995a096f47dd6548fe5aa1f32a6f0107990d" gracePeriod=600 Nov 23 09:51:01 crc kubenswrapper[5028]: I1123 09:51:01.669632 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="31d553352240adb2b582a119d8de995a096f47dd6548fe5aa1f32a6f0107990d" exitCode=0 Nov 23 09:51:01 crc kubenswrapper[5028]: I1123 09:51:01.669711 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"31d553352240adb2b582a119d8de995a096f47dd6548fe5aa1f32a6f0107990d"} Nov 23 09:51:01 crc kubenswrapper[5028]: I1123 09:51:01.670267 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"} Nov 23 09:51:01 crc kubenswrapper[5028]: I1123 09:51:01.670296 5028 scope.go:117] "RemoveContainer" containerID="e9292039f1dbac1c7ae902790e9335ecd9b131f2f5d78460c3ff6c20e70379c3" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.745148 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vxk4z"] Nov 23 09:51:09 crc kubenswrapper[5028]: E1123 09:51:09.749422 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="registry-server" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.749659 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="registry-server" Nov 23 09:51:09 crc kubenswrapper[5028]: E1123 09:51:09.749864 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="extract-utilities" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.750051 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="extract-utilities" Nov 23 09:51:09 crc 
kubenswrapper[5028]: E1123 09:51:09.750273 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="extract-content" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.750420 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="extract-content" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.751089 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="95973902-bbab-4661-8f7e-a23332f57df5" containerName="registry-server" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.755842 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.764246 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vxk4z"] Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.832224 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-catalog-content\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.832286 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-kube-api-access-qblhg\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.832753 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-utilities\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.935580 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-catalog-content\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.935642 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-kube-api-access-qblhg\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.935758 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-utilities\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.936379 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-catalog-content\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.936402 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-utilities\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:09 crc kubenswrapper[5028]: I1123 09:51:09.983386 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-kube-api-access-qblhg\") pod \"redhat-operators-vxk4z\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:10 crc kubenswrapper[5028]: I1123 09:51:10.100401 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:10 crc kubenswrapper[5028]: I1123 09:51:10.647216 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vxk4z"] Nov 23 09:51:10 crc kubenswrapper[5028]: I1123 09:51:10.794766 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxk4z" event={"ID":"930b4f74-a5bd-4553-ab1e-82ecf8356dc0","Type":"ContainerStarted","Data":"562f7eb35b0cfff579037810c500e3be7765adf45a251293ae7f0a21e7958835"} Nov 23 09:51:11 crc kubenswrapper[5028]: I1123 09:51:11.810799 5028 generic.go:334] "Generic (PLEG): container finished" podID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerID="98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8" exitCode=0 Nov 23 09:51:11 crc kubenswrapper[5028]: I1123 09:51:11.810930 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxk4z" event={"ID":"930b4f74-a5bd-4553-ab1e-82ecf8356dc0","Type":"ContainerDied","Data":"98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8"} Nov 23 09:51:11 crc kubenswrapper[5028]: I1123 09:51:11.813811 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:51:12 crc kubenswrapper[5028]: I1123 09:51:12.830783 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxk4z" event={"ID":"930b4f74-a5bd-4553-ab1e-82ecf8356dc0","Type":"ContainerStarted","Data":"805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6"} Nov 23 09:51:13 crc kubenswrapper[5028]: I1123 09:51:13.886494 5028 generic.go:334] "Generic (PLEG): container finished" podID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerID="805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6" exitCode=0 Nov 23 09:51:13 crc kubenswrapper[5028]: I1123 09:51:13.886612 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxk4z" event={"ID":"930b4f74-a5bd-4553-ab1e-82ecf8356dc0","Type":"ContainerDied","Data":"805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6"} Nov 23 09:51:14 crc kubenswrapper[5028]: I1123 09:51:14.917838 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxk4z" 
event={"ID":"930b4f74-a5bd-4553-ab1e-82ecf8356dc0","Type":"ContainerStarted","Data":"bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d"} Nov 23 09:51:14 crc kubenswrapper[5028]: I1123 09:51:14.955143 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vxk4z" podStartSLOduration=3.4531972 podStartE2EDuration="5.953935283s" podCreationTimestamp="2025-11-23 09:51:09 +0000 UTC" firstStartedPulling="2025-11-23 09:51:11.813574825 +0000 UTC m=+10855.510979604" lastFinishedPulling="2025-11-23 09:51:14.314312868 +0000 UTC m=+10858.011717687" observedRunningTime="2025-11-23 09:51:14.937629597 +0000 UTC m=+10858.635034406" watchObservedRunningTime="2025-11-23 09:51:14.953935283 +0000 UTC m=+10858.651340062" Nov 23 09:51:20 crc kubenswrapper[5028]: I1123 09:51:20.100939 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:20 crc kubenswrapper[5028]: I1123 09:51:20.101475 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:21 crc kubenswrapper[5028]: I1123 09:51:21.161461 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vxk4z" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="registry-server" probeResult="failure" output=< Nov 23 09:51:21 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 09:51:21 crc kubenswrapper[5028]: > Nov 23 09:51:30 crc kubenswrapper[5028]: I1123 09:51:30.213023 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:30 crc kubenswrapper[5028]: I1123 09:51:30.296172 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:30 crc kubenswrapper[5028]: I1123 09:51:30.479923 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vxk4z"] Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.160815 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vxk4z" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="registry-server" containerID="cri-o://bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d" gracePeriod=2 Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.778635 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.896819 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-utilities\") pod \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.896996 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-catalog-content\") pod \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.897126 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-kube-api-access-qblhg\") pod \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\" (UID: \"930b4f74-a5bd-4553-ab1e-82ecf8356dc0\") " Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.897991 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-utilities" (OuterVolumeSpecName: "utilities") pod "930b4f74-a5bd-4553-ab1e-82ecf8356dc0" (UID: "930b4f74-a5bd-4553-ab1e-82ecf8356dc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.925172 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-kube-api-access-qblhg" (OuterVolumeSpecName: "kube-api-access-qblhg") pod "930b4f74-a5bd-4553-ab1e-82ecf8356dc0" (UID: "930b4f74-a5bd-4553-ab1e-82ecf8356dc0"). InnerVolumeSpecName "kube-api-access-qblhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:51:32 crc kubenswrapper[5028]: I1123 09:51:32.987288 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "930b4f74-a5bd-4553-ab1e-82ecf8356dc0" (UID: "930b4f74-a5bd-4553-ab1e-82ecf8356dc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.001239 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.001295 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.001322 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qblhg\" (UniqueName: \"kubernetes.io/projected/930b4f74-a5bd-4553-ab1e-82ecf8356dc0-kube-api-access-qblhg\") on node \"crc\" DevicePath \"\"" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.190093 5028 generic.go:334] "Generic (PLEG): container finished" podID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerID="bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d" exitCode=0 Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.190183 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vxk4z" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.190213 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxk4z" event={"ID":"930b4f74-a5bd-4553-ab1e-82ecf8356dc0","Type":"ContainerDied","Data":"bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d"} Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.190676 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vxk4z" event={"ID":"930b4f74-a5bd-4553-ab1e-82ecf8356dc0","Type":"ContainerDied","Data":"562f7eb35b0cfff579037810c500e3be7765adf45a251293ae7f0a21e7958835"} Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.190709 5028 scope.go:117] "RemoveContainer" containerID="bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d" Nov 23 09:51:33 crc kubenswrapper[5028]: E1123 09:51:33.201996 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod930b4f74_a5bd_4553_ab1e_82ecf8356dc0.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod930b4f74_a5bd_4553_ab1e_82ecf8356dc0.slice/crio-562f7eb35b0cfff579037810c500e3be7765adf45a251293ae7f0a21e7958835\": RecentStats: unable to find data in memory cache]" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.224634 5028 scope.go:117] "RemoveContainer" containerID="805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.227679 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vxk4z"] Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.247798 5028 scope.go:117] "RemoveContainer" containerID="98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.249134 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vxk4z"] Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.293635 5028 scope.go:117] "RemoveContainer" 
containerID="bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d" Nov 23 09:51:33 crc kubenswrapper[5028]: E1123 09:51:33.294057 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d\": container with ID starting with bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d not found: ID does not exist" containerID="bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.294098 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d"} err="failed to get container status \"bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d\": rpc error: code = NotFound desc = could not find container \"bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d\": container with ID starting with bd033e7d44ad73919ca2dbdc68e8e56e5f794590808ce6b1f17ae4d80c92637d not found: ID does not exist" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.294138 5028 scope.go:117] "RemoveContainer" containerID="805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6" Nov 23 09:51:33 crc kubenswrapper[5028]: E1123 09:51:33.294508 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6\": container with ID starting with 805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6 not found: ID does not exist" containerID="805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.294553 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6"} err="failed to get container status \"805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6\": rpc error: code = NotFound desc = could not find container \"805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6\": container with ID starting with 805ad43ccf08c6fb6ca523ef0e248fe70f4f44af2cf5116587a83ab00c76cbd6 not found: ID does not exist" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.294588 5028 scope.go:117] "RemoveContainer" containerID="98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8" Nov 23 09:51:33 crc kubenswrapper[5028]: E1123 09:51:33.294933 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8\": container with ID starting with 98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8 not found: ID does not exist" containerID="98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8" Nov 23 09:51:33 crc kubenswrapper[5028]: I1123 09:51:33.294995 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8"} err="failed to get container status \"98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8\": rpc error: code = NotFound desc = could not find container \"98a0b5b99e1ccaa1874c3bc165f8f6deadf8fe55ba64f675e5943037122195f8\": container with ID starting with 
Nov 23 09:51:35 crc kubenswrapper[5028]: I1123 09:51:35.077490 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" path="/var/lib/kubelet/pods/930b4f74-a5bd-4553-ab1e-82ecf8356dc0/volumes"
Nov 23 09:53:30 crc kubenswrapper[5028]: I1123 09:53:30.947062 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:53:30 crc kubenswrapper[5028]: I1123 09:53:30.947772 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:54:00 crc kubenswrapper[5028]: I1123 09:54:00.946857 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:54:00 crc kubenswrapper[5028]: I1123 09:54:00.947720 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:54:30 crc kubenswrapper[5028]: I1123 09:54:30.946332 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 09:54:30 crc kubenswrapper[5028]: I1123 09:54:30.947102 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 09:54:30 crc kubenswrapper[5028]: I1123 09:54:30.947192 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 09:54:30 crc kubenswrapper[5028]: I1123 09:54:30.948606 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 09:54:30 crc kubenswrapper[5028]: I1123 09:54:30.948713 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" gracePeriod=600
Nov 23 09:54:31 crc kubenswrapper[5028]: E1123 09:54:31.080822 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:54:31 crc kubenswrapper[5028]: I1123 09:54:31.898298 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" exitCode=0
Nov 23 09:54:31 crc kubenswrapper[5028]: I1123 09:54:31.898384 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"}
Nov 23 09:54:31 crc kubenswrapper[5028]: I1123 09:54:31.898781 5028 scope.go:117] "RemoveContainer" containerID="31d553352240adb2b582a119d8de995a096f47dd6548fe5aa1f32a6f0107990d"
Nov 23 09:54:31 crc kubenswrapper[5028]: I1123 09:54:31.899530 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"
Nov 23 09:54:31 crc kubenswrapper[5028]: E1123 09:54:31.900339 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:54:47 crc kubenswrapper[5028]: I1123 09:54:47.062862 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"
Nov 23 09:54:47 crc kubenswrapper[5028]: E1123 09:54:47.063901 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:55:00 crc kubenswrapper[5028]: I1123 09:55:00.054463 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"
Nov 23 09:55:00 crc kubenswrapper[5028]: E1123 09:55:00.055805 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:55:11 crc kubenswrapper[5028]: I1123 09:55:11.053644 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"
Nov 23 09:55:11 crc kubenswrapper[5028]: E1123 09:55:11.054899 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:55:25 crc kubenswrapper[5028]: I1123 09:55:25.055212 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"
Nov 23 09:55:25 crc kubenswrapper[5028]: E1123 09:55:25.056633 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.197277 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-srppp"]
Nov 23 09:55:32 crc kubenswrapper[5028]: E1123 09:55:32.198351 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="registry-server"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.198366 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="registry-server"
Nov 23 09:55:32 crc kubenswrapper[5028]: E1123 09:55:32.198396 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="extract-utilities"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.198403 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="extract-utilities"
Nov 23 09:55:32 crc kubenswrapper[5028]: E1123 09:55:32.198425 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="extract-content"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.198431 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="extract-content"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.198667 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="930b4f74-a5bd-4553-ab1e-82ecf8356dc0" containerName="registry-server"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.200418 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.220351 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srppp"]
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.264663 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-catalog-content\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.264727 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mlhl\" (UniqueName: \"kubernetes.io/projected/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-kube-api-access-9mlhl\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.265087 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-utilities\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.367479 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-utilities\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.367569 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-catalog-content\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.367594 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mlhl\" (UniqueName: \"kubernetes.io/projected/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-kube-api-access-9mlhl\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.368075 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-utilities\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.368082 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-catalog-content\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.396806 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mlhl\" (UniqueName: \"kubernetes.io/projected/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-kube-api-access-9mlhl\") pod \"redhat-marketplace-srppp\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.521914 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srppp"
Nov 23 09:55:32 crc kubenswrapper[5028]: I1123 09:55:32.992268 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srppp"]
Nov 23 09:55:33 crc kubenswrapper[5028]: I1123 09:55:33.919393 5028 generic.go:334] "Generic (PLEG): container finished" podID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerID="572d3a54ab374bf4fece2154cd180337931113ce098a680cb4c853c7afd49dfe" exitCode=0
Nov 23 09:55:33 crc kubenswrapper[5028]: I1123 09:55:33.919479 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srppp" event={"ID":"860355c8-bf2a-43ec-8da3-6bd041f8e1b3","Type":"ContainerDied","Data":"572d3a54ab374bf4fece2154cd180337931113ce098a680cb4c853c7afd49dfe"}
Nov 23 09:55:33 crc kubenswrapper[5028]: I1123 09:55:33.919703 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srppp" event={"ID":"860355c8-bf2a-43ec-8da3-6bd041f8e1b3","Type":"ContainerStarted","Data":"0887e7e2f375ee3bc828eae4583399af1ed53a19669574f2c7f85a96e30399f8"}
Nov 23 09:55:34 crc kubenswrapper[5028]: I1123 09:55:34.936464 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srppp" event={"ID":"860355c8-bf2a-43ec-8da3-6bd041f8e1b3","Type":"ContainerStarted","Data":"54ed57d18a0ae17b1684dab3e9d385e2435295bc4208eb881323302d2732dfa4"}
Nov 23 09:55:35 crc kubenswrapper[5028]: I1123 09:55:35.968040 5028 generic.go:334] "Generic (PLEG): container finished" podID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerID="54ed57d18a0ae17b1684dab3e9d385e2435295bc4208eb881323302d2732dfa4" exitCode=0
Nov 23 09:55:35 crc kubenswrapper[5028]: I1123 09:55:35.968104 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srppp" event={"ID":"860355c8-bf2a-43ec-8da3-6bd041f8e1b3","Type":"ContainerDied","Data":"54ed57d18a0ae17b1684dab3e9d385e2435295bc4208eb881323302d2732dfa4"}
Nov 23 09:55:37 crc kubenswrapper[5028]: I1123 09:55:37.003468 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srppp" event={"ID":"860355c8-bf2a-43ec-8da3-6bd041f8e1b3","Type":"ContainerStarted","Data":"9686e14204693d25833830512b7d4aebee98df17fa6e0bc8b302fe44ebcd0ace"}
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.053671 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47"
Nov 23 09:55:38 crc kubenswrapper[5028]: E1123 09:55:38.054672 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.404087 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-srppp" podStartSLOduration=3.911924838 podStartE2EDuration="6.403938719s" podCreationTimestamp="2025-11-23 09:55:32 +0000 UTC" firstStartedPulling="2025-11-23 09:55:33.922856551 +0000 UTC m=+11117.620261340" lastFinishedPulling="2025-11-23 09:55:36.414870432 +0000 UTC m=+11120.112275221" observedRunningTime="2025-11-23 09:55:37.034558197 +0000 UTC m=+11120.731963006" watchObservedRunningTime="2025-11-23 09:55:38.403938719 +0000 UTC m=+11122.101343498"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.407288 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.409016 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.414208 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.414248 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jz44m"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.414630 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.414703 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.422032 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553667 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553725 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553757 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-config-data\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553784 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest"
Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553812 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5drj\" (UniqueName: \"kubernetes.io/projected/621da467-543c-4ecf-80cc-fa2bb98d7a68-kube-api-access-g5drj\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest"
\"kubernetes.io/projected/621da467-543c-4ecf-80cc-fa2bb98d7a68-kube-api-access-g5drj\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553842 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553864 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553903 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.553940 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655625 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655693 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655719 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-config-data\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655744 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655789 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5drj\" (UniqueName: 
\"kubernetes.io/projected/621da467-543c-4ecf-80cc-fa2bb98d7a68-kube-api-access-g5drj\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655815 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655839 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655884 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.655913 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.656483 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.656725 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.657422 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-config-data\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.658182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.659311 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: 
\"621da467-543c-4ecf-80cc-fa2bb98d7a68\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.665907 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.671833 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.678784 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.682018 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5drj\" (UniqueName: \"kubernetes.io/projected/621da467-543c-4ecf-80cc-fa2bb98d7a68-kube-api-access-g5drj\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.709270 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") " pod="openstack/tempest-tests-tempest" Nov 23 09:55:38 crc kubenswrapper[5028]: I1123 09:55:38.785456 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 09:55:39 crc kubenswrapper[5028]: I1123 09:55:39.273440 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 23 09:55:39 crc kubenswrapper[5028]: W1123 09:55:39.275232 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621da467_543c_4ecf_80cc_fa2bb98d7a68.slice/crio-4707f7d39091a7116683d3c97a6b659305bb23b14aca695f6c63e308f8f03167 WatchSource:0}: Error finding container 4707f7d39091a7116683d3c97a6b659305bb23b14aca695f6c63e308f8f03167: Status 404 returned error can't find the container with id 4707f7d39091a7116683d3c97a6b659305bb23b14aca695f6c63e308f8f03167 Nov 23 09:55:40 crc kubenswrapper[5028]: I1123 09:55:40.042753 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"621da467-543c-4ecf-80cc-fa2bb98d7a68","Type":"ContainerStarted","Data":"4707f7d39091a7116683d3c97a6b659305bb23b14aca695f6c63e308f8f03167"} Nov 23 09:55:42 crc kubenswrapper[5028]: I1123 09:55:42.522582 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-srppp" Nov 23 09:55:42 crc kubenswrapper[5028]: I1123 09:55:42.523140 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-srppp" Nov 23 09:55:42 crc kubenswrapper[5028]: I1123 09:55:42.591095 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-srppp" Nov 23 09:55:43 crc kubenswrapper[5028]: I1123 09:55:43.146663 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-srppp" Nov 23 09:55:43 crc kubenswrapper[5028]: I1123 09:55:43.205785 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srppp"] Nov 23 09:55:45 crc kubenswrapper[5028]: I1123 09:55:45.113343 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-srppp" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerName="registry-server" containerID="cri-o://9686e14204693d25833830512b7d4aebee98df17fa6e0bc8b302fe44ebcd0ace" gracePeriod=2 Nov 23 09:55:46 crc kubenswrapper[5028]: I1123 09:55:46.156175 5028 generic.go:334] "Generic (PLEG): container finished" podID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerID="9686e14204693d25833830512b7d4aebee98df17fa6e0bc8b302fe44ebcd0ace" exitCode=0 Nov 23 09:55:46 crc kubenswrapper[5028]: I1123 09:55:46.157235 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srppp" event={"ID":"860355c8-bf2a-43ec-8da3-6bd041f8e1b3","Type":"ContainerDied","Data":"9686e14204693d25833830512b7d4aebee98df17fa6e0bc8b302fe44ebcd0ace"} Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.777760 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srppp" Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.866561 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-catalog-content\") pod \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.866715 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mlhl\" (UniqueName: \"kubernetes.io/projected/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-kube-api-access-9mlhl\") pod \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.867244 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-utilities\") pod \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\" (UID: \"860355c8-bf2a-43ec-8da3-6bd041f8e1b3\") " Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.867861 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-utilities" (OuterVolumeSpecName: "utilities") pod "860355c8-bf2a-43ec-8da3-6bd041f8e1b3" (UID: "860355c8-bf2a-43ec-8da3-6bd041f8e1b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.868823 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.883258 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-kube-api-access-9mlhl" (OuterVolumeSpecName: "kube-api-access-9mlhl") pod "860355c8-bf2a-43ec-8da3-6bd041f8e1b3" (UID: "860355c8-bf2a-43ec-8da3-6bd041f8e1b3"). InnerVolumeSpecName "kube-api-access-9mlhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.887839 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "860355c8-bf2a-43ec-8da3-6bd041f8e1b3" (UID: "860355c8-bf2a-43ec-8da3-6bd041f8e1b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.970521 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:55:49 crc kubenswrapper[5028]: I1123 09:55:49.970555 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mlhl\" (UniqueName: \"kubernetes.io/projected/860355c8-bf2a-43ec-8da3-6bd041f8e1b3-kube-api-access-9mlhl\") on node \"crc\" DevicePath \"\"" Nov 23 09:55:50 crc kubenswrapper[5028]: I1123 09:55:50.228374 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srppp" event={"ID":"860355c8-bf2a-43ec-8da3-6bd041f8e1b3","Type":"ContainerDied","Data":"0887e7e2f375ee3bc828eae4583399af1ed53a19669574f2c7f85a96e30399f8"} Nov 23 09:55:50 crc kubenswrapper[5028]: I1123 09:55:50.228446 5028 scope.go:117] "RemoveContainer" containerID="9686e14204693d25833830512b7d4aebee98df17fa6e0bc8b302fe44ebcd0ace" Nov 23 09:55:50 crc kubenswrapper[5028]: I1123 09:55:50.228479 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srppp" Nov 23 09:55:50 crc kubenswrapper[5028]: I1123 09:55:50.267347 5028 scope.go:117] "RemoveContainer" containerID="54ed57d18a0ae17b1684dab3e9d385e2435295bc4208eb881323302d2732dfa4" Nov 23 09:55:50 crc kubenswrapper[5028]: I1123 09:55:50.287644 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srppp"] Nov 23 09:55:50 crc kubenswrapper[5028]: I1123 09:55:50.298923 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-srppp"] Nov 23 09:55:50 crc kubenswrapper[5028]: I1123 09:55:50.313255 5028 scope.go:117] "RemoveContainer" containerID="572d3a54ab374bf4fece2154cd180337931113ce098a680cb4c853c7afd49dfe" Nov 23 09:55:51 crc kubenswrapper[5028]: I1123 09:55:51.054220 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:55:51 crc kubenswrapper[5028]: E1123 09:55:51.055095 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:55:51 crc kubenswrapper[5028]: I1123 09:55:51.069577 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" path="/var/lib/kubelet/pods/860355c8-bf2a-43ec-8da3-6bd041f8e1b3/volumes" Nov 23 09:56:05 crc kubenswrapper[5028]: I1123 09:56:05.054188 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:56:05 crc kubenswrapper[5028]: E1123 09:56:05.055361 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:56:17 crc kubenswrapper[5028]: I1123 09:56:17.064237 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:56:17 crc kubenswrapper[5028]: E1123 09:56:17.065709 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:56:22 crc kubenswrapper[5028]: E1123 09:56:22.860695 5028 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 09:56:22 crc kubenswrapper[5028]: E1123 09:56:22.861604 5028 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af" Nov 23 09:56:22 crc kubenswrapper[5028]: E1123 09:56:22.861842 5028 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5drj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNot
Present,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(621da467-543c-4ecf-80cc-fa2bb98d7a68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 23 09:56:22 crc kubenswrapper[5028]: E1123 09:56:22.863305 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="621da467-543c-4ecf-80cc-fa2bb98d7a68" Nov 23 09:56:23 crc kubenswrapper[5028]: E1123 09:56:23.731928 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:8e43c662a6abf8c9a07ada252f8dc6af\\\"\"" pod="openstack/tempest-tests-tempest" podUID="621da467-543c-4ecf-80cc-fa2bb98d7a68" Nov 23 09:56:32 crc kubenswrapper[5028]: I1123 09:56:32.053936 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:56:32 crc kubenswrapper[5028]: E1123 09:56:32.055260 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:56:34 crc kubenswrapper[5028]: I1123 09:56:34.058421 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 09:56:34 crc kubenswrapper[5028]: I1123 09:56:34.330997 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 23 09:56:35 crc kubenswrapper[5028]: I1123 09:56:35.932011 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"621da467-543c-4ecf-80cc-fa2bb98d7a68","Type":"ContainerStarted","Data":"c6f46c437b5dbee299b078ec01d6e8fcfdf1d6e4dbda64adce9c76adce05ea96"} Nov 23 09:56:35 crc kubenswrapper[5028]: I1123 09:56:35.972876 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.926730728 podStartE2EDuration="58.97285459s" podCreationTimestamp="2025-11-23 09:55:37 +0000 UTC" 
firstStartedPulling="2025-11-23 09:55:39.279127407 +0000 UTC m=+11122.976532196" lastFinishedPulling="2025-11-23 09:56:34.325251249 +0000 UTC m=+11178.022656058" observedRunningTime="2025-11-23 09:56:35.954715658 +0000 UTC m=+11179.652120447" watchObservedRunningTime="2025-11-23 09:56:35.97285459 +0000 UTC m=+11179.670259379" Nov 23 09:56:45 crc kubenswrapper[5028]: I1123 09:56:45.054462 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:56:45 crc kubenswrapper[5028]: E1123 09:56:45.055525 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:56:58 crc kubenswrapper[5028]: I1123 09:56:58.054255 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:56:58 crc kubenswrapper[5028]: E1123 09:56:58.055369 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:57:11 crc kubenswrapper[5028]: I1123 09:57:11.053550 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:57:11 crc kubenswrapper[5028]: E1123 09:57:11.054514 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:57:22 crc kubenswrapper[5028]: I1123 09:57:22.053446 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:57:22 crc kubenswrapper[5028]: E1123 09:57:22.054613 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.442143 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m6j2m"] Nov 23 09:57:35 crc kubenswrapper[5028]: E1123 09:57:35.443118 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerName="extract-utilities" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.443132 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" 
containerName="extract-utilities" Nov 23 09:57:35 crc kubenswrapper[5028]: E1123 09:57:35.443183 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerName="extract-content" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.443190 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerName="extract-content" Nov 23 09:57:35 crc kubenswrapper[5028]: E1123 09:57:35.443207 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerName="registry-server" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.443220 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerName="registry-server" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.443423 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="860355c8-bf2a-43ec-8da3-6bd041f8e1b3" containerName="registry-server" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.445198 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.467814 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m6j2m"] Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.481833 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjrz7\" (UniqueName: \"kubernetes.io/projected/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-kube-api-access-rjrz7\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.482257 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-catalog-content\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.482713 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-utilities\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.585138 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-utilities\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.585243 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjrz7\" (UniqueName: \"kubernetes.io/projected/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-kube-api-access-rjrz7\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.585324 5028 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-catalog-content\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.585985 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-catalog-content\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.586301 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-utilities\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.610820 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjrz7\" (UniqueName: \"kubernetes.io/projected/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-kube-api-access-rjrz7\") pod \"community-operators-m6j2m\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:35 crc kubenswrapper[5028]: I1123 09:57:35.767941 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:36 crc kubenswrapper[5028]: I1123 09:57:36.376265 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m6j2m"] Nov 23 09:57:36 crc kubenswrapper[5028]: I1123 09:57:36.769723 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerID="738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98" exitCode=0 Nov 23 09:57:36 crc kubenswrapper[5028]: I1123 09:57:36.769775 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6j2m" event={"ID":"cc4039d1-5cff-4c9b-aed6-1952ab5174cc","Type":"ContainerDied","Data":"738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98"} Nov 23 09:57:36 crc kubenswrapper[5028]: I1123 09:57:36.769805 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6j2m" event={"ID":"cc4039d1-5cff-4c9b-aed6-1952ab5174cc","Type":"ContainerStarted","Data":"0cb5a6188a196cc20f03bd460e8dbd310766139392b631983cf6ce7364ee073f"} Nov 23 09:57:37 crc kubenswrapper[5028]: I1123 09:57:37.066382 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:57:37 crc kubenswrapper[5028]: E1123 09:57:37.066966 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:57:37 crc kubenswrapper[5028]: I1123 09:57:37.795159 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-m6j2m" event={"ID":"cc4039d1-5cff-4c9b-aed6-1952ab5174cc","Type":"ContainerStarted","Data":"1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5"} Nov 23 09:57:39 crc kubenswrapper[5028]: I1123 09:57:39.821649 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerID="1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5" exitCode=0 Nov 23 09:57:39 crc kubenswrapper[5028]: I1123 09:57:39.821777 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6j2m" event={"ID":"cc4039d1-5cff-4c9b-aed6-1952ab5174cc","Type":"ContainerDied","Data":"1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5"} Nov 23 09:57:40 crc kubenswrapper[5028]: I1123 09:57:40.836631 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6j2m" event={"ID":"cc4039d1-5cff-4c9b-aed6-1952ab5174cc","Type":"ContainerStarted","Data":"2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57"} Nov 23 09:57:40 crc kubenswrapper[5028]: I1123 09:57:40.861064 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m6j2m" podStartSLOduration=2.413358998 podStartE2EDuration="5.861044484s" podCreationTimestamp="2025-11-23 09:57:35 +0000 UTC" firstStartedPulling="2025-11-23 09:57:36.77232371 +0000 UTC m=+11240.469728499" lastFinishedPulling="2025-11-23 09:57:40.220009196 +0000 UTC m=+11243.917413985" observedRunningTime="2025-11-23 09:57:40.858779347 +0000 UTC m=+11244.556184136" watchObservedRunningTime="2025-11-23 09:57:40.861044484 +0000 UTC m=+11244.558449263" Nov 23 09:57:45 crc kubenswrapper[5028]: I1123 09:57:45.768680 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:45 crc kubenswrapper[5028]: I1123 09:57:45.769284 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:46 crc kubenswrapper[5028]: I1123 09:57:46.837520 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m6j2m" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="registry-server" probeResult="failure" output=< Nov 23 09:57:46 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 09:57:46 crc kubenswrapper[5028]: > Nov 23 09:57:52 crc kubenswrapper[5028]: I1123 09:57:52.053076 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:57:52 crc kubenswrapper[5028]: E1123 09:57:52.054972 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:57:55 crc kubenswrapper[5028]: I1123 09:57:55.832597 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:55 crc kubenswrapper[5028]: I1123 09:57:55.899360 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:56 crc kubenswrapper[5028]: I1123 09:57:56.077394 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m6j2m"] Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.022965 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m6j2m" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="registry-server" containerID="cri-o://2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57" gracePeriod=2 Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.699638 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.862859 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-utilities\") pod \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.862928 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjrz7\" (UniqueName: \"kubernetes.io/projected/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-kube-api-access-rjrz7\") pod \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.863245 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-catalog-content\") pod \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\" (UID: \"cc4039d1-5cff-4c9b-aed6-1952ab5174cc\") " Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.864059 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-utilities" (OuterVolumeSpecName: "utilities") pod "cc4039d1-5cff-4c9b-aed6-1952ab5174cc" (UID: "cc4039d1-5cff-4c9b-aed6-1952ab5174cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.870146 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-kube-api-access-rjrz7" (OuterVolumeSpecName: "kube-api-access-rjrz7") pod "cc4039d1-5cff-4c9b-aed6-1952ab5174cc" (UID: "cc4039d1-5cff-4c9b-aed6-1952ab5174cc"). InnerVolumeSpecName "kube-api-access-rjrz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.929475 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc4039d1-5cff-4c9b-aed6-1952ab5174cc" (UID: "cc4039d1-5cff-4c9b-aed6-1952ab5174cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.965864 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.965905 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 09:57:57 crc kubenswrapper[5028]: I1123 09:57:57.965916 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjrz7\" (UniqueName: \"kubernetes.io/projected/cc4039d1-5cff-4c9b-aed6-1952ab5174cc-kube-api-access-rjrz7\") on node \"crc\" DevicePath \"\"" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.034314 5028 generic.go:334] "Generic (PLEG): container finished" podID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerID="2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57" exitCode=0 Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.034369 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6j2m" event={"ID":"cc4039d1-5cff-4c9b-aed6-1952ab5174cc","Type":"ContainerDied","Data":"2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57"} Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.034404 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m6j2m" event={"ID":"cc4039d1-5cff-4c9b-aed6-1952ab5174cc","Type":"ContainerDied","Data":"0cb5a6188a196cc20f03bd460e8dbd310766139392b631983cf6ce7364ee073f"} Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.034422 5028 scope.go:117] "RemoveContainer" containerID="2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.034587 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m6j2m" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.061143 5028 scope.go:117] "RemoveContainer" containerID="1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.079003 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m6j2m"] Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.086163 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m6j2m"] Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.102145 5028 scope.go:117] "RemoveContainer" containerID="738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.136756 5028 scope.go:117] "RemoveContainer" containerID="2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57" Nov 23 09:57:58 crc kubenswrapper[5028]: E1123 09:57:58.137514 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57\": container with ID starting with 2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57 not found: ID does not exist" containerID="2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.137575 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57"} err="failed to get container status \"2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57\": rpc error: code = NotFound desc = could not find container \"2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57\": container with ID starting with 2ffedc8b65cd2370952edf4e33ab14c9e134450af6205712c80b3f9e023b0f57 not found: ID does not exist" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.137608 5028 scope.go:117] "RemoveContainer" containerID="1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5" Nov 23 09:57:58 crc kubenswrapper[5028]: E1123 09:57:58.137993 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5\": container with ID starting with 1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5 not found: ID does not exist" containerID="1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.138028 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5"} err="failed to get container status \"1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5\": rpc error: code = NotFound desc = could not find container \"1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5\": container with ID starting with 1c50e3b8b0328ff70256d815a9b28c82a73aed3287b566bba3e897c4101947a5 not found: ID does not exist" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.138051 5028 scope.go:117] "RemoveContainer" containerID="738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98" Nov 23 09:57:58 crc kubenswrapper[5028]: E1123 09:57:58.138276 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98\": container with ID starting with 738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98 not found: ID does not exist" containerID="738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98" Nov 23 09:57:58 crc kubenswrapper[5028]: I1123 09:57:58.138307 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98"} err="failed to get container status \"738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98\": rpc error: code = NotFound desc = could not find container \"738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98\": container with ID starting with 738528639fb3fbe6e2647a7a593e026f29bb70b52a59d7d2ed8d1c2ed47ccb98 not found: ID does not exist" Nov 23 09:57:59 crc kubenswrapper[5028]: I1123 09:57:59.064017 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" path="/var/lib/kubelet/pods/cc4039d1-5cff-4c9b-aed6-1952ab5174cc/volumes" Nov 23 09:58:07 crc kubenswrapper[5028]: I1123 09:58:07.065782 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:58:07 crc kubenswrapper[5028]: E1123 09:58:07.067160 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:58:20 crc kubenswrapper[5028]: I1123 09:58:20.053463 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:58:20 crc kubenswrapper[5028]: E1123 09:58:20.054200 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:58:34 crc kubenswrapper[5028]: I1123 09:58:34.053430 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:58:34 crc kubenswrapper[5028]: E1123 09:58:34.054393 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:58:46 crc kubenswrapper[5028]: I1123 09:58:46.053596 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:58:46 crc kubenswrapper[5028]: E1123 09:58:46.055024 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:59:01 crc kubenswrapper[5028]: I1123 09:59:01.053855 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:59:01 crc kubenswrapper[5028]: E1123 09:59:01.054597 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:59:15 crc kubenswrapper[5028]: I1123 09:59:15.053799 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:59:15 crc kubenswrapper[5028]: E1123 09:59:15.055047 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:59:26 crc kubenswrapper[5028]: I1123 09:59:26.053178 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:59:26 crc kubenswrapper[5028]: E1123 09:59:26.053904 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 09:59:41 crc kubenswrapper[5028]: I1123 09:59:41.054115 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 09:59:41 crc kubenswrapper[5028]: I1123 09:59:41.898734 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"7d5303c4e4eab7807ae81bdd8c35a4de44f9bf3ce08b3f35c4d5ab32e27ac71a"} Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.166051 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l"] Nov 23 10:00:00 crc kubenswrapper[5028]: E1123 10:00:00.167414 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="extract-utilities" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.167435 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="extract-utilities" Nov 23 10:00:00 crc kubenswrapper[5028]: E1123 10:00:00.167461 5028 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="extract-content" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.167471 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="extract-content" Nov 23 10:00:00 crc kubenswrapper[5028]: E1123 10:00:00.167495 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="registry-server" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.167503 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="registry-server" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.168614 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4039d1-5cff-4c9b-aed6-1952ab5174cc" containerName="registry-server" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.170340 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.173626 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.188064 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l"] Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.173993 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.213916 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq66l\" (UniqueName: \"kubernetes.io/projected/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-kube-api-access-mq66l\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.214075 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-secret-volume\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.214107 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-config-volume\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.329888 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-secret-volume\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.330075 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-config-volume\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.330524 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq66l\" (UniqueName: \"kubernetes.io/projected/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-kube-api-access-mq66l\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.334567 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-config-volume\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.354697 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq66l\" (UniqueName: \"kubernetes.io/projected/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-kube-api-access-mq66l\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.355446 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-secret-volume\") pod \"collect-profiles-29398200-bc65l\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:00 crc kubenswrapper[5028]: I1123 10:00:00.546656 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:01 crc kubenswrapper[5028]: W1123 10:00:01.068892 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3cddbd0_b7db_42c7_81a4_d4f7a903af6a.slice/crio-be48d07396c27b60b0ba8d8969b52fb408e6c64b4a45c8f67ea102e923918e2d WatchSource:0}: Error finding container be48d07396c27b60b0ba8d8969b52fb408e6c64b4a45c8f67ea102e923918e2d: Status 404 returned error can't find the container with id be48d07396c27b60b0ba8d8969b52fb408e6c64b4a45c8f67ea102e923918e2d Nov 23 10:00:01 crc kubenswrapper[5028]: I1123 10:00:01.071689 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l"] Nov 23 10:00:01 crc kubenswrapper[5028]: I1123 10:00:01.117422 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" event={"ID":"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a","Type":"ContainerStarted","Data":"be48d07396c27b60b0ba8d8969b52fb408e6c64b4a45c8f67ea102e923918e2d"} Nov 23 10:00:02 crc kubenswrapper[5028]: I1123 10:00:02.130905 5028 generic.go:334] "Generic (PLEG): container finished" podID="e3cddbd0-b7db-42c7-81a4-d4f7a903af6a" containerID="b0977973a13547f34276a587b7cfca7bbc6c9f583dde0aca1e078c731616d244" exitCode=0 Nov 23 10:00:02 crc kubenswrapper[5028]: I1123 10:00:02.130999 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" event={"ID":"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a","Type":"ContainerDied","Data":"b0977973a13547f34276a587b7cfca7bbc6c9f583dde0aca1e078c731616d244"} Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.746578 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.939848 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-config-volume\") pod \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.940050 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq66l\" (UniqueName: \"kubernetes.io/projected/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-kube-api-access-mq66l\") pod \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.940128 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-secret-volume\") pod \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\" (UID: \"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a\") " Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.940745 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-config-volume" (OuterVolumeSpecName: "config-volume") pod "e3cddbd0-b7db-42c7-81a4-d4f7a903af6a" (UID: "e3cddbd0-b7db-42c7-81a4-d4f7a903af6a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.941716 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.946593 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e3cddbd0-b7db-42c7-81a4-d4f7a903af6a" (UID: "e3cddbd0-b7db-42c7-81a4-d4f7a903af6a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:00:03 crc kubenswrapper[5028]: I1123 10:00:03.950494 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-kube-api-access-mq66l" (OuterVolumeSpecName: "kube-api-access-mq66l") pod "e3cddbd0-b7db-42c7-81a4-d4f7a903af6a" (UID: "e3cddbd0-b7db-42c7-81a4-d4f7a903af6a"). InnerVolumeSpecName "kube-api-access-mq66l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:00:04 crc kubenswrapper[5028]: I1123 10:00:04.044462 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq66l\" (UniqueName: \"kubernetes.io/projected/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-kube-api-access-mq66l\") on node \"crc\" DevicePath \"\"" Nov 23 10:00:04 crc kubenswrapper[5028]: I1123 10:00:04.044496 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3cddbd0-b7db-42c7-81a4-d4f7a903af6a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 10:00:04 crc kubenswrapper[5028]: I1123 10:00:04.163380 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" event={"ID":"e3cddbd0-b7db-42c7-81a4-d4f7a903af6a","Type":"ContainerDied","Data":"be48d07396c27b60b0ba8d8969b52fb408e6c64b4a45c8f67ea102e923918e2d"} Nov 23 10:00:04 crc kubenswrapper[5028]: I1123 10:00:04.163423 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be48d07396c27b60b0ba8d8969b52fb408e6c64b4a45c8f67ea102e923918e2d" Nov 23 10:00:04 crc kubenswrapper[5028]: I1123 10:00:04.163491 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398200-bc65l" Nov 23 10:00:04 crc kubenswrapper[5028]: I1123 10:00:04.859463 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"] Nov 23 10:00:04 crc kubenswrapper[5028]: I1123 10:00:04.869119 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398155-ln9hd"] Nov 23 10:00:05 crc kubenswrapper[5028]: I1123 10:00:05.065516 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55893335-2a6d-4bfa-b107-71c03cec23bb" path="/var/lib/kubelet/pods/55893335-2a6d-4bfa-b107-71c03cec23bb/volumes" Nov 23 10:00:34 crc kubenswrapper[5028]: I1123 10:00:34.363813 5028 scope.go:117] "RemoveContainer" containerID="484243187afa840a27e7c74f47fe168592a463812b2b19cbe9a51d608feb7c35" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.183012 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29398201-8hbgz"] Nov 23 10:01:00 crc kubenswrapper[5028]: E1123 10:01:00.187521 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cddbd0-b7db-42c7-81a4-d4f7a903af6a" containerName="collect-profiles" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.187545 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cddbd0-b7db-42c7-81a4-d4f7a903af6a" containerName="collect-profiles" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.187736 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cddbd0-b7db-42c7-81a4-d4f7a903af6a" containerName="collect-profiles" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.188661 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.199573 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398201-8hbgz"] Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.269053 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-combined-ca-bundle\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.269566 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9bsn\" (UniqueName: \"kubernetes.io/projected/7d03980b-1bc7-40e6-890f-e7412777f302-kube-api-access-k9bsn\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.269809 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-config-data\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.269922 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-fernet-keys\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.371893 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-combined-ca-bundle\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.372414 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9bsn\" (UniqueName: \"kubernetes.io/projected/7d03980b-1bc7-40e6-890f-e7412777f302-kube-api-access-k9bsn\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.372561 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-config-data\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.372705 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-fernet-keys\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.381687 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-config-data\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.381931 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-combined-ca-bundle\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.382140 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-fernet-keys\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.394366 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9bsn\" (UniqueName: \"kubernetes.io/projected/7d03980b-1bc7-40e6-890f-e7412777f302-kube-api-access-k9bsn\") pod \"keystone-cron-29398201-8hbgz\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:00 crc kubenswrapper[5028]: I1123 10:01:00.531915 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:01 crc kubenswrapper[5028]: I1123 10:01:01.011133 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29398201-8hbgz"] Nov 23 10:01:01 crc kubenswrapper[5028]: I1123 10:01:01.904173 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398201-8hbgz" event={"ID":"7d03980b-1bc7-40e6-890f-e7412777f302","Type":"ContainerStarted","Data":"2233192eb3150dc625fa0f0c27e649d7427078005efb2bd5d7e6183201718785"} Nov 23 10:01:01 crc kubenswrapper[5028]: I1123 10:01:01.904682 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398201-8hbgz" event={"ID":"7d03980b-1bc7-40e6-890f-e7412777f302","Type":"ContainerStarted","Data":"d1830b7210208ac79ec9a4ae7d1be34e9b999812677ea20d38f9b32aa1c804b6"} Nov 23 10:01:01 crc kubenswrapper[5028]: I1123 10:01:01.936684 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29398201-8hbgz" podStartSLOduration=1.936655591 podStartE2EDuration="1.936655591s" podCreationTimestamp="2025-11-23 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:01:01.926725413 +0000 UTC m=+11445.624130192" watchObservedRunningTime="2025-11-23 10:01:01.936655591 +0000 UTC m=+11445.634060370" Nov 23 10:01:04 crc kubenswrapper[5028]: I1123 10:01:04.952031 5028 generic.go:334] "Generic (PLEG): container finished" podID="7d03980b-1bc7-40e6-890f-e7412777f302" containerID="2233192eb3150dc625fa0f0c27e649d7427078005efb2bd5d7e6183201718785" exitCode=0 Nov 23 10:01:04 crc kubenswrapper[5028]: I1123 10:01:04.952142 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398201-8hbgz" event={"ID":"7d03980b-1bc7-40e6-890f-e7412777f302","Type":"ContainerDied","Data":"2233192eb3150dc625fa0f0c27e649d7427078005efb2bd5d7e6183201718785"} Nov 23 10:01:06 crc kubenswrapper[5028]: 
I1123 10:01:06.531840 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.639169 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-config-data\") pod \"7d03980b-1bc7-40e6-890f-e7412777f302\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.639227 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-fernet-keys\") pod \"7d03980b-1bc7-40e6-890f-e7412777f302\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.639343 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-combined-ca-bundle\") pod \"7d03980b-1bc7-40e6-890f-e7412777f302\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.639367 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9bsn\" (UniqueName: \"kubernetes.io/projected/7d03980b-1bc7-40e6-890f-e7412777f302-kube-api-access-k9bsn\") pod \"7d03980b-1bc7-40e6-890f-e7412777f302\" (UID: \"7d03980b-1bc7-40e6-890f-e7412777f302\") " Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.646205 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7d03980b-1bc7-40e6-890f-e7412777f302" (UID: "7d03980b-1bc7-40e6-890f-e7412777f302"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.647309 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d03980b-1bc7-40e6-890f-e7412777f302-kube-api-access-k9bsn" (OuterVolumeSpecName: "kube-api-access-k9bsn") pod "7d03980b-1bc7-40e6-890f-e7412777f302" (UID: "7d03980b-1bc7-40e6-890f-e7412777f302"). InnerVolumeSpecName "kube-api-access-k9bsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.677115 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d03980b-1bc7-40e6-890f-e7412777f302" (UID: "7d03980b-1bc7-40e6-890f-e7412777f302"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.724175 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-config-data" (OuterVolumeSpecName: "config-data") pod "7d03980b-1bc7-40e6-890f-e7412777f302" (UID: "7d03980b-1bc7-40e6-890f-e7412777f302"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.742649 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.742710 5028 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.742724 5028 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d03980b-1bc7-40e6-890f-e7412777f302-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.742742 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9bsn\" (UniqueName: \"kubernetes.io/projected/7d03980b-1bc7-40e6-890f-e7412777f302-kube-api-access-k9bsn\") on node \"crc\" DevicePath \"\"" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.986070 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29398201-8hbgz" event={"ID":"7d03980b-1bc7-40e6-890f-e7412777f302","Type":"ContainerDied","Data":"d1830b7210208ac79ec9a4ae7d1be34e9b999812677ea20d38f9b32aa1c804b6"} Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.986142 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1830b7210208ac79ec9a4ae7d1be34e9b999812677ea20d38f9b32aa1c804b6" Nov 23 10:01:06 crc kubenswrapper[5028]: I1123 10:01:06.986197 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29398201-8hbgz" Nov 23 10:01:26 crc kubenswrapper[5028]: I1123 10:01:26.995453 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vdq2v"] Nov 23 10:01:26 crc kubenswrapper[5028]: E1123 10:01:26.996560 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d03980b-1bc7-40e6-890f-e7412777f302" containerName="keystone-cron" Nov 23 10:01:26 crc kubenswrapper[5028]: I1123 10:01:26.996578 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d03980b-1bc7-40e6-890f-e7412777f302" containerName="keystone-cron" Nov 23 10:01:26 crc kubenswrapper[5028]: I1123 10:01:26.996885 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d03980b-1bc7-40e6-890f-e7412777f302" containerName="keystone-cron" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.003259 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.021750 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdq2v"] Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.155469 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-utilities\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.155531 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-catalog-content\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.155597 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clt8l\" (UniqueName: \"kubernetes.io/projected/8417c8cd-25db-4064-9691-1b4b4e78baf2-kube-api-access-clt8l\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.257462 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-utilities\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.257568 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-catalog-content\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.257682 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clt8l\" (UniqueName: \"kubernetes.io/projected/8417c8cd-25db-4064-9691-1b4b4e78baf2-kube-api-access-clt8l\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.258057 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-utilities\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.258101 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-catalog-content\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.281855 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-clt8l\" (UniqueName: \"kubernetes.io/projected/8417c8cd-25db-4064-9691-1b4b4e78baf2-kube-api-access-clt8l\") pod \"redhat-operators-vdq2v\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.331288 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:27 crc kubenswrapper[5028]: I1123 10:01:27.915303 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdq2v"] Nov 23 10:01:28 crc kubenswrapper[5028]: I1123 10:01:28.285167 5028 generic.go:334] "Generic (PLEG): container finished" podID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerID="8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd" exitCode=0 Nov 23 10:01:28 crc kubenswrapper[5028]: I1123 10:01:28.285255 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdq2v" event={"ID":"8417c8cd-25db-4064-9691-1b4b4e78baf2","Type":"ContainerDied","Data":"8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd"} Nov 23 10:01:28 crc kubenswrapper[5028]: I1123 10:01:28.285415 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdq2v" event={"ID":"8417c8cd-25db-4064-9691-1b4b4e78baf2","Type":"ContainerStarted","Data":"4ee8a86ca779e5ccd6e7dfe4d42cdec4a1498165860008eb0a5084dac2bb655b"} Nov 23 10:01:29 crc kubenswrapper[5028]: I1123 10:01:29.302554 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdq2v" event={"ID":"8417c8cd-25db-4064-9691-1b4b4e78baf2","Type":"ContainerStarted","Data":"335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538"} Nov 23 10:01:34 crc kubenswrapper[5028]: I1123 10:01:34.376672 5028 generic.go:334] "Generic (PLEG): container finished" podID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerID="335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538" exitCode=0 Nov 23 10:01:34 crc kubenswrapper[5028]: I1123 10:01:34.376827 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdq2v" event={"ID":"8417c8cd-25db-4064-9691-1b4b4e78baf2","Type":"ContainerDied","Data":"335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538"} Nov 23 10:01:34 crc kubenswrapper[5028]: I1123 10:01:34.383145 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 10:01:35 crc kubenswrapper[5028]: I1123 10:01:35.397236 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdq2v" event={"ID":"8417c8cd-25db-4064-9691-1b4b4e78baf2","Type":"ContainerStarted","Data":"18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1"} Nov 23 10:01:37 crc kubenswrapper[5028]: I1123 10:01:37.332890 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:37 crc kubenswrapper[5028]: I1123 10:01:37.333286 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:38 crc kubenswrapper[5028]: I1123 10:01:38.389837 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vdq2v" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="registry-server" probeResult="failure" output=< Nov 23 
10:01:38 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 10:01:38 crc kubenswrapper[5028]: > Nov 23 10:01:47 crc kubenswrapper[5028]: I1123 10:01:47.393829 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:47 crc kubenswrapper[5028]: I1123 10:01:47.426889 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vdq2v" podStartSLOduration=14.758732091 podStartE2EDuration="21.426865705s" podCreationTimestamp="2025-11-23 10:01:26 +0000 UTC" firstStartedPulling="2025-11-23 10:01:28.287402174 +0000 UTC m=+11471.984806943" lastFinishedPulling="2025-11-23 10:01:34.955535748 +0000 UTC m=+11478.652940557" observedRunningTime="2025-11-23 10:01:35.420125695 +0000 UTC m=+11479.117530514" watchObservedRunningTime="2025-11-23 10:01:47.426865705 +0000 UTC m=+11491.124270484" Nov 23 10:01:47 crc kubenswrapper[5028]: I1123 10:01:47.470055 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:47 crc kubenswrapper[5028]: I1123 10:01:47.643185 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vdq2v"] Nov 23 10:01:48 crc kubenswrapper[5028]: I1123 10:01:48.577870 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vdq2v" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="registry-server" containerID="cri-o://18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1" gracePeriod=2 Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.362974 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.473428 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-utilities\") pod \"8417c8cd-25db-4064-9691-1b4b4e78baf2\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.473665 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clt8l\" (UniqueName: \"kubernetes.io/projected/8417c8cd-25db-4064-9691-1b4b4e78baf2-kube-api-access-clt8l\") pod \"8417c8cd-25db-4064-9691-1b4b4e78baf2\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.473709 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-catalog-content\") pod \"8417c8cd-25db-4064-9691-1b4b4e78baf2\" (UID: \"8417c8cd-25db-4064-9691-1b4b4e78baf2\") " Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.474338 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-utilities" (OuterVolumeSpecName: "utilities") pod "8417c8cd-25db-4064-9691-1b4b4e78baf2" (UID: "8417c8cd-25db-4064-9691-1b4b4e78baf2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.474767 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.479458 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8417c8cd-25db-4064-9691-1b4b4e78baf2-kube-api-access-clt8l" (OuterVolumeSpecName: "kube-api-access-clt8l") pod "8417c8cd-25db-4064-9691-1b4b4e78baf2" (UID: "8417c8cd-25db-4064-9691-1b4b4e78baf2"). InnerVolumeSpecName "kube-api-access-clt8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.570835 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8417c8cd-25db-4064-9691-1b4b4e78baf2" (UID: "8417c8cd-25db-4064-9691-1b4b4e78baf2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.577302 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clt8l\" (UniqueName: \"kubernetes.io/projected/8417c8cd-25db-4064-9691-1b4b4e78baf2-kube-api-access-clt8l\") on node \"crc\" DevicePath \"\"" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.577336 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8417c8cd-25db-4064-9691-1b4b4e78baf2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.591261 5028 generic.go:334] "Generic (PLEG): container finished" podID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerID="18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1" exitCode=0 Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.591310 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdq2v" event={"ID":"8417c8cd-25db-4064-9691-1b4b4e78baf2","Type":"ContainerDied","Data":"18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1"} Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.591357 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdq2v" event={"ID":"8417c8cd-25db-4064-9691-1b4b4e78baf2","Type":"ContainerDied","Data":"4ee8a86ca779e5ccd6e7dfe4d42cdec4a1498165860008eb0a5084dac2bb655b"} Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.591377 5028 scope.go:117] "RemoveContainer" containerID="18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.591577 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vdq2v" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.643101 5028 scope.go:117] "RemoveContainer" containerID="335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.658507 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vdq2v"] Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.678522 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vdq2v"] Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.697598 5028 scope.go:117] "RemoveContainer" containerID="8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.743729 5028 scope.go:117] "RemoveContainer" containerID="18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1" Nov 23 10:01:49 crc kubenswrapper[5028]: E1123 10:01:49.744482 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1\": container with ID starting with 18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1 not found: ID does not exist" containerID="18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.744532 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1"} err="failed to get container status \"18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1\": rpc error: code = NotFound desc = could not find container \"18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1\": container with ID starting with 18fad63fd04a7799f68858d3d3e0f10f7ce44421984e65944a36cab1ee08f8f1 not found: ID does not exist" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.744565 5028 scope.go:117] "RemoveContainer" containerID="335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538" Nov 23 10:01:49 crc kubenswrapper[5028]: E1123 10:01:49.744941 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538\": container with ID starting with 335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538 not found: ID does not exist" containerID="335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.745030 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538"} err="failed to get container status \"335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538\": rpc error: code = NotFound desc = could not find container \"335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538\": container with ID starting with 335a631a8febe3d6e08755de9995fcecda660a970948994b0fe0cdcabcccf538 not found: ID does not exist" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.745074 5028 scope.go:117] "RemoveContainer" containerID="8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd" Nov 23 10:01:49 crc kubenswrapper[5028]: E1123 10:01:49.745841 5028 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd\": container with ID starting with 8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd not found: ID does not exist" containerID="8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd" Nov 23 10:01:49 crc kubenswrapper[5028]: I1123 10:01:49.745868 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd"} err="failed to get container status \"8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd\": rpc error: code = NotFound desc = could not find container \"8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd\": container with ID starting with 8166cac8b73b2502805dc49a777c0a4d5e9d52e027eebcaa33d7b54983fbfafd not found: ID does not exist" Nov 23 10:01:51 crc kubenswrapper[5028]: I1123 10:01:51.064192 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" path="/var/lib/kubelet/pods/8417c8cd-25db-4064-9691-1b4b4e78baf2/volumes" Nov 23 10:02:00 crc kubenswrapper[5028]: I1123 10:02:00.946447 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:02:00 crc kubenswrapper[5028]: I1123 10:02:00.947432 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:02:30 crc kubenswrapper[5028]: I1123 10:02:30.946784 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:02:30 crc kubenswrapper[5028]: I1123 10:02:30.948233 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.289333 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qvkxl"] Nov 23 10:02:34 crc kubenswrapper[5028]: E1123 10:02:34.290382 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="extract-content" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.290399 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="extract-content" Nov 23 10:02:34 crc kubenswrapper[5028]: E1123 10:02:34.290415 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="extract-utilities" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.290420 5028 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="extract-utilities" Nov 23 10:02:34 crc kubenswrapper[5028]: E1123 10:02:34.290441 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="registry-server" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.290447 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="registry-server" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.290663 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8417c8cd-25db-4064-9691-1b4b4e78baf2" containerName="registry-server" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.295128 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.318656 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6275\" (UniqueName: \"kubernetes.io/projected/8f9aa9c2-16c2-48ce-95a7-a423e0309409-kube-api-access-p6275\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.319135 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-catalog-content\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.319284 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-utilities\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.336015 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvkxl"] Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.421486 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-utilities\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.421607 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6275\" (UniqueName: \"kubernetes.io/projected/8f9aa9c2-16c2-48ce-95a7-a423e0309409-kube-api-access-p6275\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.421735 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-catalog-content\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 
10:02:34.422223 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-catalog-content\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.422407 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-utilities\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.453887 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6275\" (UniqueName: \"kubernetes.io/projected/8f9aa9c2-16c2-48ce-95a7-a423e0309409-kube-api-access-p6275\") pod \"certified-operators-qvkxl\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:34 crc kubenswrapper[5028]: I1123 10:02:34.624362 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:35 crc kubenswrapper[5028]: I1123 10:02:35.232986 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qvkxl"] Nov 23 10:02:36 crc kubenswrapper[5028]: I1123 10:02:36.156122 5028 generic.go:334] "Generic (PLEG): container finished" podID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerID="a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403" exitCode=0 Nov 23 10:02:36 crc kubenswrapper[5028]: I1123 10:02:36.156370 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvkxl" event={"ID":"8f9aa9c2-16c2-48ce-95a7-a423e0309409","Type":"ContainerDied","Data":"a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403"} Nov 23 10:02:36 crc kubenswrapper[5028]: I1123 10:02:36.156402 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvkxl" event={"ID":"8f9aa9c2-16c2-48ce-95a7-a423e0309409","Type":"ContainerStarted","Data":"699d541b52c4f4072de7d522fc496c1739909726182d426bdcc4fd0d4337d2b4"} Nov 23 10:02:37 crc kubenswrapper[5028]: I1123 10:02:37.167552 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvkxl" event={"ID":"8f9aa9c2-16c2-48ce-95a7-a423e0309409","Type":"ContainerStarted","Data":"d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618"} Nov 23 10:02:39 crc kubenswrapper[5028]: I1123 10:02:39.192218 5028 generic.go:334] "Generic (PLEG): container finished" podID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerID="d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618" exitCode=0 Nov 23 10:02:39 crc kubenswrapper[5028]: I1123 10:02:39.192272 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvkxl" event={"ID":"8f9aa9c2-16c2-48ce-95a7-a423e0309409","Type":"ContainerDied","Data":"d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618"} Nov 23 10:02:40 crc kubenswrapper[5028]: I1123 10:02:40.207088 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvkxl" 
event={"ID":"8f9aa9c2-16c2-48ce-95a7-a423e0309409","Type":"ContainerStarted","Data":"0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403"} Nov 23 10:02:40 crc kubenswrapper[5028]: I1123 10:02:40.237154 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qvkxl" podStartSLOduration=2.733445711 podStartE2EDuration="6.237124323s" podCreationTimestamp="2025-11-23 10:02:34 +0000 UTC" firstStartedPulling="2025-11-23 10:02:36.159895826 +0000 UTC m=+11539.857300605" lastFinishedPulling="2025-11-23 10:02:39.663574418 +0000 UTC m=+11543.360979217" observedRunningTime="2025-11-23 10:02:40.224901838 +0000 UTC m=+11543.922306617" watchObservedRunningTime="2025-11-23 10:02:40.237124323 +0000 UTC m=+11543.934529102" Nov 23 10:02:44 crc kubenswrapper[5028]: I1123 10:02:44.625253 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:44 crc kubenswrapper[5028]: I1123 10:02:44.625726 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:44 crc kubenswrapper[5028]: I1123 10:02:44.682222 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:45 crc kubenswrapper[5028]: I1123 10:02:45.320139 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:45 crc kubenswrapper[5028]: I1123 10:02:45.382708 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvkxl"] Nov 23 10:02:47 crc kubenswrapper[5028]: I1123 10:02:47.284527 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qvkxl" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="registry-server" containerID="cri-o://0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403" gracePeriod=2 Nov 23 10:02:47 crc kubenswrapper[5028]: I1123 10:02:47.944255 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.086835 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-utilities\") pod \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.087111 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6275\" (UniqueName: \"kubernetes.io/projected/8f9aa9c2-16c2-48ce-95a7-a423e0309409-kube-api-access-p6275\") pod \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.087154 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-catalog-content\") pod \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\" (UID: \"8f9aa9c2-16c2-48ce-95a7-a423e0309409\") " Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.088214 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-utilities" (OuterVolumeSpecName: "utilities") pod "8f9aa9c2-16c2-48ce-95a7-a423e0309409" (UID: "8f9aa9c2-16c2-48ce-95a7-a423e0309409"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.092812 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9aa9c2-16c2-48ce-95a7-a423e0309409-kube-api-access-p6275" (OuterVolumeSpecName: "kube-api-access-p6275") pod "8f9aa9c2-16c2-48ce-95a7-a423e0309409" (UID: "8f9aa9c2-16c2-48ce-95a7-a423e0309409"). InnerVolumeSpecName "kube-api-access-p6275". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.136032 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f9aa9c2-16c2-48ce-95a7-a423e0309409" (UID: "8f9aa9c2-16c2-48ce-95a7-a423e0309409"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.189700 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.189733 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f9aa9c2-16c2-48ce-95a7-a423e0309409-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.189743 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6275\" (UniqueName: \"kubernetes.io/projected/8f9aa9c2-16c2-48ce-95a7-a423e0309409-kube-api-access-p6275\") on node \"crc\" DevicePath \"\"" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.299559 5028 generic.go:334] "Generic (PLEG): container finished" podID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerID="0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403" exitCode=0 Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.299607 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvkxl" event={"ID":"8f9aa9c2-16c2-48ce-95a7-a423e0309409","Type":"ContainerDied","Data":"0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403"} Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.299638 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qvkxl" event={"ID":"8f9aa9c2-16c2-48ce-95a7-a423e0309409","Type":"ContainerDied","Data":"699d541b52c4f4072de7d522fc496c1739909726182d426bdcc4fd0d4337d2b4"} Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.299660 5028 scope.go:117] "RemoveContainer" containerID="0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.299810 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qvkxl" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.329666 5028 scope.go:117] "RemoveContainer" containerID="d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.347366 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qvkxl"] Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.371898 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qvkxl"] Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.380272 5028 scope.go:117] "RemoveContainer" containerID="a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.456650 5028 scope.go:117] "RemoveContainer" containerID="0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403" Nov 23 10:02:48 crc kubenswrapper[5028]: E1123 10:02:48.458285 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403\": container with ID starting with 0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403 not found: ID does not exist" containerID="0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.458352 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403"} err="failed to get container status \"0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403\": rpc error: code = NotFound desc = could not find container \"0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403\": container with ID starting with 0a9caeaae9330dd58e647425a9bc1fe8bd2c1dcc1b3a4f5ed2c829ffe3912403 not found: ID does not exist" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.458393 5028 scope.go:117] "RemoveContainer" containerID="d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618" Nov 23 10:02:48 crc kubenswrapper[5028]: E1123 10:02:48.459023 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618\": container with ID starting with d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618 not found: ID does not exist" containerID="d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.459088 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618"} err="failed to get container status \"d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618\": rpc error: code = NotFound desc = could not find container \"d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618\": container with ID starting with d4dea239a67212b50a9e0dac59ba2a2b3242e52a90d1a2ee6fbdeaa21cc1b618 not found: ID does not exist" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.459130 5028 scope.go:117] "RemoveContainer" containerID="a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403" Nov 23 10:02:48 crc kubenswrapper[5028]: E1123 10:02:48.459459 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403\": container with ID starting with a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403 not found: ID does not exist" containerID="a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403" Nov 23 10:02:48 crc kubenswrapper[5028]: I1123 10:02:48.459504 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403"} err="failed to get container status \"a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403\": rpc error: code = NotFound desc = could not find container \"a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403\": container with ID starting with a27cbfe1e75c04502ccbc4d879baeb93bc7561701337bab74615f5fcc6b63403 not found: ID does not exist" Nov 23 10:02:49 crc kubenswrapper[5028]: I1123 10:02:49.115731 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" path="/var/lib/kubelet/pods/8f9aa9c2-16c2-48ce-95a7-a423e0309409/volumes" Nov 23 10:03:00 crc kubenswrapper[5028]: I1123 10:03:00.946645 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:03:00 crc kubenswrapper[5028]: I1123 10:03:00.947343 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:03:00 crc kubenswrapper[5028]: I1123 10:03:00.947417 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 10:03:00 crc kubenswrapper[5028]: I1123 10:03:00.948832 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d5303c4e4eab7807ae81bdd8c35a4de44f9bf3ce08b3f35c4d5ab32e27ac71a"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 10:03:00 crc kubenswrapper[5028]: I1123 10:03:00.948933 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://7d5303c4e4eab7807ae81bdd8c35a4de44f9bf3ce08b3f35c4d5ab32e27ac71a" gracePeriod=600 Nov 23 10:03:01 crc kubenswrapper[5028]: I1123 10:03:01.493271 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="7d5303c4e4eab7807ae81bdd8c35a4de44f9bf3ce08b3f35c4d5ab32e27ac71a" exitCode=0 Nov 23 10:03:01 crc kubenswrapper[5028]: I1123 10:03:01.493360 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"7d5303c4e4eab7807ae81bdd8c35a4de44f9bf3ce08b3f35c4d5ab32e27ac71a"} Nov 23 10:03:01 crc kubenswrapper[5028]: I1123 10:03:01.494076 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b"} Nov 23 10:03:01 crc kubenswrapper[5028]: I1123 10:03:01.494156 5028 scope.go:117] "RemoveContainer" containerID="9d4d41b00f98ac36a2c9bb024f1a4c2c9a76eda3c59b993afa0cdcb616a0ef47" Nov 23 10:05:30 crc kubenswrapper[5028]: I1123 10:05:30.946323 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:05:30 crc kubenswrapper[5028]: I1123 10:05:30.946831 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.285391 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tpc"] Nov 23 10:05:54 crc kubenswrapper[5028]: E1123 10:05:54.286602 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="extract-content" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.286620 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="extract-content" Nov 23 10:05:54 crc kubenswrapper[5028]: E1123 10:05:54.286649 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="extract-utilities" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.286658 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="extract-utilities" Nov 23 10:05:54 crc kubenswrapper[5028]: E1123 10:05:54.286672 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="registry-server" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.286682 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="registry-server" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.287008 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9aa9c2-16c2-48ce-95a7-a423e0309409" containerName="registry-server" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.289305 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.338138 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-utilities\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.338283 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-catalog-content\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.338536 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrx97\" (UniqueName: \"kubernetes.io/projected/56335d62-824f-4dab-bdb3-0c90ff24be98-kube-api-access-mrx97\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.346175 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tpc"] Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.441200 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrx97\" (UniqueName: \"kubernetes.io/projected/56335d62-824f-4dab-bdb3-0c90ff24be98-kube-api-access-mrx97\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.441300 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-utilities\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.441357 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-catalog-content\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.441817 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-catalog-content\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.441904 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-utilities\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.468109 5028 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mrx97\" (UniqueName: \"kubernetes.io/projected/56335d62-824f-4dab-bdb3-0c90ff24be98-kube-api-access-mrx97\") pod \"redhat-marketplace-j8tpc\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:54 crc kubenswrapper[5028]: I1123 10:05:54.629761 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:05:55 crc kubenswrapper[5028]: I1123 10:05:55.175885 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tpc"] Nov 23 10:05:56 crc kubenswrapper[5028]: I1123 10:05:56.112260 5028 generic.go:334] "Generic (PLEG): container finished" podID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerID="12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39" exitCode=0 Nov 23 10:05:56 crc kubenswrapper[5028]: I1123 10:05:56.112360 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tpc" event={"ID":"56335d62-824f-4dab-bdb3-0c90ff24be98","Type":"ContainerDied","Data":"12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39"} Nov 23 10:05:56 crc kubenswrapper[5028]: I1123 10:05:56.112515 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tpc" event={"ID":"56335d62-824f-4dab-bdb3-0c90ff24be98","Type":"ContainerStarted","Data":"d2cc5e9673f326e08ebf4e25f079072c6cf484917cd7a9904c1af0c75d680322"} Nov 23 10:05:58 crc kubenswrapper[5028]: I1123 10:05:58.145219 5028 generic.go:334] "Generic (PLEG): container finished" podID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerID="77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf" exitCode=0 Nov 23 10:05:58 crc kubenswrapper[5028]: I1123 10:05:58.145297 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tpc" event={"ID":"56335d62-824f-4dab-bdb3-0c90ff24be98","Type":"ContainerDied","Data":"77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf"} Nov 23 10:05:59 crc kubenswrapper[5028]: I1123 10:05:59.176305 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tpc" event={"ID":"56335d62-824f-4dab-bdb3-0c90ff24be98","Type":"ContainerStarted","Data":"88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d"} Nov 23 10:05:59 crc kubenswrapper[5028]: I1123 10:05:59.201704 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8tpc" podStartSLOduration=2.772091261 podStartE2EDuration="5.201663365s" podCreationTimestamp="2025-11-23 10:05:54 +0000 UTC" firstStartedPulling="2025-11-23 10:05:56.121745381 +0000 UTC m=+11739.819150200" lastFinishedPulling="2025-11-23 10:05:58.551317525 +0000 UTC m=+11742.248722304" observedRunningTime="2025-11-23 10:05:59.198307491 +0000 UTC m=+11742.895712280" watchObservedRunningTime="2025-11-23 10:05:59.201663365 +0000 UTC m=+11742.899068144" Nov 23 10:06:00 crc kubenswrapper[5028]: I1123 10:06:00.945966 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:06:00 crc kubenswrapper[5028]: I1123 10:06:00.946371 5028 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:06:04 crc kubenswrapper[5028]: I1123 10:06:04.630827 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:06:04 crc kubenswrapper[5028]: I1123 10:06:04.631364 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:06:04 crc kubenswrapper[5028]: I1123 10:06:04.705843 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:06:05 crc kubenswrapper[5028]: I1123 10:06:05.337623 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:06:05 crc kubenswrapper[5028]: I1123 10:06:05.424218 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tpc"] Nov 23 10:06:07 crc kubenswrapper[5028]: I1123 10:06:07.284255 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8tpc" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="registry-server" containerID="cri-o://88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d" gracePeriod=2 Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.123206 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.198352 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrx97\" (UniqueName: \"kubernetes.io/projected/56335d62-824f-4dab-bdb3-0c90ff24be98-kube-api-access-mrx97\") pod \"56335d62-824f-4dab-bdb3-0c90ff24be98\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.198435 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-utilities\") pod \"56335d62-824f-4dab-bdb3-0c90ff24be98\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.198544 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-catalog-content\") pod \"56335d62-824f-4dab-bdb3-0c90ff24be98\" (UID: \"56335d62-824f-4dab-bdb3-0c90ff24be98\") " Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.200300 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-utilities" (OuterVolumeSpecName: "utilities") pod "56335d62-824f-4dab-bdb3-0c90ff24be98" (UID: "56335d62-824f-4dab-bdb3-0c90ff24be98"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.207832 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56335d62-824f-4dab-bdb3-0c90ff24be98-kube-api-access-mrx97" (OuterVolumeSpecName: "kube-api-access-mrx97") pod "56335d62-824f-4dab-bdb3-0c90ff24be98" (UID: "56335d62-824f-4dab-bdb3-0c90ff24be98"). InnerVolumeSpecName "kube-api-access-mrx97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.224193 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56335d62-824f-4dab-bdb3-0c90ff24be98" (UID: "56335d62-824f-4dab-bdb3-0c90ff24be98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.303084 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.303126 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56335d62-824f-4dab-bdb3-0c90ff24be98-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.303145 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrx97\" (UniqueName: \"kubernetes.io/projected/56335d62-824f-4dab-bdb3-0c90ff24be98-kube-api-access-mrx97\") on node \"crc\" DevicePath \"\"" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.340686 5028 generic.go:334] "Generic (PLEG): container finished" podID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerID="88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d" exitCode=0 Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.340738 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tpc" event={"ID":"56335d62-824f-4dab-bdb3-0c90ff24be98","Type":"ContainerDied","Data":"88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d"} Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.340768 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8tpc" event={"ID":"56335d62-824f-4dab-bdb3-0c90ff24be98","Type":"ContainerDied","Data":"d2cc5e9673f326e08ebf4e25f079072c6cf484917cd7a9904c1af0c75d680322"} Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.340787 5028 scope.go:117] "RemoveContainer" containerID="88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.340987 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8tpc" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.383671 5028 scope.go:117] "RemoveContainer" containerID="77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.390011 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tpc"] Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.401686 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8tpc"] Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.422079 5028 scope.go:117] "RemoveContainer" containerID="12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.468356 5028 scope.go:117] "RemoveContainer" containerID="88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d" Nov 23 10:06:08 crc kubenswrapper[5028]: E1123 10:06:08.468729 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d\": container with ID starting with 88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d not found: ID does not exist" containerID="88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.468761 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d"} err="failed to get container status \"88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d\": rpc error: code = NotFound desc = could not find container \"88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d\": container with ID starting with 88a60bf84f1a861961442d7bfdd94b2d866f0b556a1a797dcf6624315365bc2d not found: ID does not exist" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.468783 5028 scope.go:117] "RemoveContainer" containerID="77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf" Nov 23 10:06:08 crc kubenswrapper[5028]: E1123 10:06:08.468980 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf\": container with ID starting with 77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf not found: ID does not exist" containerID="77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.469005 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf"} err="failed to get container status \"77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf\": rpc error: code = NotFound desc = could not find container \"77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf\": container with ID starting with 77ff40e72afb5d741fe8da75a091f7262fb4ec8b9e2dd1410ca454a8aa0abddf not found: ID does not exist" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.469017 5028 scope.go:117] "RemoveContainer" containerID="12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39" Nov 23 10:06:08 crc kubenswrapper[5028]: E1123 10:06:08.469180 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39\": container with ID starting with 12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39 not found: ID does not exist" containerID="12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39" Nov 23 10:06:08 crc kubenswrapper[5028]: I1123 10:06:08.469200 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39"} err="failed to get container status \"12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39\": rpc error: code = NotFound desc = could not find container \"12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39\": container with ID starting with 12f98792608e71626fd03c29d0a1d494edec6b2c2f2e6cd14b3cf39ebc136f39 not found: ID does not exist" Nov 23 10:06:09 crc kubenswrapper[5028]: I1123 10:06:09.067789 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" path="/var/lib/kubelet/pods/56335d62-824f-4dab-bdb3-0c90ff24be98/volumes" Nov 23 10:06:30 crc kubenswrapper[5028]: I1123 10:06:30.946757 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:06:30 crc kubenswrapper[5028]: I1123 10:06:30.947546 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:06:30 crc kubenswrapper[5028]: I1123 10:06:30.947623 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 10:06:30 crc kubenswrapper[5028]: I1123 10:06:30.949158 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 10:06:30 crc kubenswrapper[5028]: I1123 10:06:30.949274 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" gracePeriod=600 Nov 23 10:06:31 crc kubenswrapper[5028]: E1123 10:06:31.077491 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:06:31 crc kubenswrapper[5028]: I1123 10:06:31.643223 5028 generic.go:334] 
"Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" exitCode=0 Nov 23 10:06:31 crc kubenswrapper[5028]: I1123 10:06:31.643515 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b"} Nov 23 10:06:31 crc kubenswrapper[5028]: I1123 10:06:31.643745 5028 scope.go:117] "RemoveContainer" containerID="7d5303c4e4eab7807ae81bdd8c35a4de44f9bf3ce08b3f35c4d5ab32e27ac71a" Nov 23 10:06:31 crc kubenswrapper[5028]: I1123 10:06:31.645285 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:06:31 crc kubenswrapper[5028]: E1123 10:06:31.646100 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:06:46 crc kubenswrapper[5028]: I1123 10:06:46.053731 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:06:46 crc kubenswrapper[5028]: E1123 10:06:46.054499 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:07:00 crc kubenswrapper[5028]: I1123 10:07:00.053406 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:07:00 crc kubenswrapper[5028]: E1123 10:07:00.054149 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:07:12 crc kubenswrapper[5028]: I1123 10:07:12.054897 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:07:12 crc kubenswrapper[5028]: E1123 10:07:12.056342 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:07:25 crc kubenswrapper[5028]: I1123 10:07:25.053650 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" 
Nov 23 10:07:25 crc kubenswrapper[5028]: E1123 10:07:25.055008 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:07:38 crc kubenswrapper[5028]: I1123 10:07:38.054370 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:07:38 crc kubenswrapper[5028]: E1123 10:07:38.055165 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:07:53 crc kubenswrapper[5028]: I1123 10:07:53.054278 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:07:53 crc kubenswrapper[5028]: E1123 10:07:53.055129 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:08:04 crc kubenswrapper[5028]: I1123 10:08:04.054757 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:08:04 crc kubenswrapper[5028]: E1123 10:08:04.056870 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:08:15 crc kubenswrapper[5028]: I1123 10:08:15.053494 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:08:15 crc kubenswrapper[5028]: E1123 10:08:15.054652 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:08:26 crc kubenswrapper[5028]: I1123 10:08:26.053777 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:08:26 crc kubenswrapper[5028]: E1123 10:08:26.055098 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:08:40 crc kubenswrapper[5028]: I1123 10:08:40.054251 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:08:40 crc kubenswrapper[5028]: E1123 10:08:40.055439 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.498248 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tndtn"] Nov 23 10:08:41 crc kubenswrapper[5028]: E1123 10:08:41.499112 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="extract-content" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.499129 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="extract-content" Nov 23 10:08:41 crc kubenswrapper[5028]: E1123 10:08:41.499176 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="extract-utilities" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.499185 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="extract-utilities" Nov 23 10:08:41 crc kubenswrapper[5028]: E1123 10:08:41.499209 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="registry-server" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.499218 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="registry-server" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.499506 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="56335d62-824f-4dab-bdb3-0c90ff24be98" containerName="registry-server" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.501479 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.517146 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tndtn"] Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.583256 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmd2d\" (UniqueName: \"kubernetes.io/projected/871774ed-be11-4287-92aa-ffd8b6c7cfa3-kube-api-access-nmd2d\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.583696 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-catalog-content\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.583798 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-utilities\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.686405 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmd2d\" (UniqueName: \"kubernetes.io/projected/871774ed-be11-4287-92aa-ffd8b6c7cfa3-kube-api-access-nmd2d\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.686484 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-catalog-content\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.686547 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-utilities\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.687196 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-utilities\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.687831 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-catalog-content\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.708212 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nmd2d\" (UniqueName: \"kubernetes.io/projected/871774ed-be11-4287-92aa-ffd8b6c7cfa3-kube-api-access-nmd2d\") pod \"community-operators-tndtn\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:41 crc kubenswrapper[5028]: I1123 10:08:41.830753 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:42 crc kubenswrapper[5028]: I1123 10:08:42.481165 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tndtn"] Nov 23 10:08:42 crc kubenswrapper[5028]: I1123 10:08:42.543236 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tndtn" event={"ID":"871774ed-be11-4287-92aa-ffd8b6c7cfa3","Type":"ContainerStarted","Data":"321694fc763b061f1ac5d5d38df42f983a62bd3b558659fb2789c1eacb37d8af"} Nov 23 10:08:43 crc kubenswrapper[5028]: I1123 10:08:43.575264 5028 generic.go:334] "Generic (PLEG): container finished" podID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerID="c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe" exitCode=0 Nov 23 10:08:43 crc kubenswrapper[5028]: I1123 10:08:43.575359 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tndtn" event={"ID":"871774ed-be11-4287-92aa-ffd8b6c7cfa3","Type":"ContainerDied","Data":"c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe"} Nov 23 10:08:43 crc kubenswrapper[5028]: I1123 10:08:43.578370 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 10:08:44 crc kubenswrapper[5028]: I1123 10:08:44.590007 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tndtn" event={"ID":"871774ed-be11-4287-92aa-ffd8b6c7cfa3","Type":"ContainerStarted","Data":"f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d"} Nov 23 10:08:46 crc kubenswrapper[5028]: I1123 10:08:46.614586 5028 generic.go:334] "Generic (PLEG): container finished" podID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerID="f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d" exitCode=0 Nov 23 10:08:46 crc kubenswrapper[5028]: I1123 10:08:46.614651 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tndtn" event={"ID":"871774ed-be11-4287-92aa-ffd8b6c7cfa3","Type":"ContainerDied","Data":"f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d"} Nov 23 10:08:47 crc kubenswrapper[5028]: I1123 10:08:47.635049 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tndtn" event={"ID":"871774ed-be11-4287-92aa-ffd8b6c7cfa3","Type":"ContainerStarted","Data":"a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74"} Nov 23 10:08:47 crc kubenswrapper[5028]: I1123 10:08:47.660337 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tndtn" podStartSLOduration=3.24578716 podStartE2EDuration="6.660313329s" podCreationTimestamp="2025-11-23 10:08:41 +0000 UTC" firstStartedPulling="2025-11-23 10:08:43.578029976 +0000 UTC m=+11907.275434755" lastFinishedPulling="2025-11-23 10:08:46.992556145 +0000 UTC m=+11910.689960924" observedRunningTime="2025-11-23 10:08:47.658872663 +0000 UTC m=+11911.356277452" watchObservedRunningTime="2025-11-23 
10:08:47.660313329 +0000 UTC m=+11911.357718108" Nov 23 10:08:51 crc kubenswrapper[5028]: I1123 10:08:51.830992 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:51 crc kubenswrapper[5028]: I1123 10:08:51.831446 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:51 crc kubenswrapper[5028]: I1123 10:08:51.911738 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:52 crc kubenswrapper[5028]: I1123 10:08:52.743837 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:52 crc kubenswrapper[5028]: I1123 10:08:52.803613 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tndtn"] Nov 23 10:08:53 crc kubenswrapper[5028]: I1123 10:08:53.054045 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:08:53 crc kubenswrapper[5028]: E1123 10:08:53.054343 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:08:54 crc kubenswrapper[5028]: I1123 10:08:54.717877 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tndtn" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="registry-server" containerID="cri-o://a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74" gracePeriod=2 Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.455721 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.558070 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-catalog-content\") pod \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.558536 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmd2d\" (UniqueName: \"kubernetes.io/projected/871774ed-be11-4287-92aa-ffd8b6c7cfa3-kube-api-access-nmd2d\") pod \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.558646 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-utilities\") pod \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\" (UID: \"871774ed-be11-4287-92aa-ffd8b6c7cfa3\") " Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.559519 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-utilities" (OuterVolumeSpecName: "utilities") pod "871774ed-be11-4287-92aa-ffd8b6c7cfa3" (UID: "871774ed-be11-4287-92aa-ffd8b6c7cfa3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.566246 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871774ed-be11-4287-92aa-ffd8b6c7cfa3-kube-api-access-nmd2d" (OuterVolumeSpecName: "kube-api-access-nmd2d") pod "871774ed-be11-4287-92aa-ffd8b6c7cfa3" (UID: "871774ed-be11-4287-92aa-ffd8b6c7cfa3"). InnerVolumeSpecName "kube-api-access-nmd2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.622391 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "871774ed-be11-4287-92aa-ffd8b6c7cfa3" (UID: "871774ed-be11-4287-92aa-ffd8b6c7cfa3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.660414 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.660447 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmd2d\" (UniqueName: \"kubernetes.io/projected/871774ed-be11-4287-92aa-ffd8b6c7cfa3-kube-api-access-nmd2d\") on node \"crc\" DevicePath \"\"" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.660460 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/871774ed-be11-4287-92aa-ffd8b6c7cfa3-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.730708 5028 generic.go:334] "Generic (PLEG): container finished" podID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerID="a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74" exitCode=0 Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.730764 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tndtn" event={"ID":"871774ed-be11-4287-92aa-ffd8b6c7cfa3","Type":"ContainerDied","Data":"a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74"} Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.730792 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tndtn" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.730818 5028 scope.go:117] "RemoveContainer" containerID="a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.730804 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tndtn" event={"ID":"871774ed-be11-4287-92aa-ffd8b6c7cfa3","Type":"ContainerDied","Data":"321694fc763b061f1ac5d5d38df42f983a62bd3b558659fb2789c1eacb37d8af"} Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.766996 5028 scope.go:117] "RemoveContainer" containerID="f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.779307 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tndtn"] Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.790317 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tndtn"] Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.804424 5028 scope.go:117] "RemoveContainer" containerID="c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.859670 5028 scope.go:117] "RemoveContainer" containerID="a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74" Nov 23 10:08:55 crc kubenswrapper[5028]: E1123 10:08:55.860582 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74\": container with ID starting with a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74 not found: ID does not exist" containerID="a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.860647 
5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74"} err="failed to get container status \"a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74\": rpc error: code = NotFound desc = could not find container \"a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74\": container with ID starting with a969ed905130d4daf06a139a1d2a1d51c4c82860dcd7688adf6fadba9c6bca74 not found: ID does not exist" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.860681 5028 scope.go:117] "RemoveContainer" containerID="f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d" Nov 23 10:08:55 crc kubenswrapper[5028]: E1123 10:08:55.861110 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d\": container with ID starting with f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d not found: ID does not exist" containerID="f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.861179 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d"} err="failed to get container status \"f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d\": rpc error: code = NotFound desc = could not find container \"f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d\": container with ID starting with f27659bd254a6cf90937d498b4367f24578ae14d271297a5041d22140ae4753d not found: ID does not exist" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.861221 5028 scope.go:117] "RemoveContainer" containerID="c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe" Nov 23 10:08:55 crc kubenswrapper[5028]: E1123 10:08:55.861714 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe\": container with ID starting with c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe not found: ID does not exist" containerID="c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe" Nov 23 10:08:55 crc kubenswrapper[5028]: I1123 10:08:55.861761 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe"} err="failed to get container status \"c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe\": rpc error: code = NotFound desc = could not find container \"c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe\": container with ID starting with c62f6318cd902606924b0815cd411071d1d266853ef191f1ae70eda9a8905cfe not found: ID does not exist" Nov 23 10:08:57 crc kubenswrapper[5028]: I1123 10:08:57.064523 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" path="/var/lib/kubelet/pods/871774ed-be11-4287-92aa-ffd8b6c7cfa3/volumes" Nov 23 10:09:07 crc kubenswrapper[5028]: I1123 10:09:07.068100 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:09:07 crc kubenswrapper[5028]: E1123 10:09:07.069142 5028 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:09:18 crc kubenswrapper[5028]: I1123 10:09:18.054254 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:09:18 crc kubenswrapper[5028]: E1123 10:09:18.055330 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:09:32 crc kubenswrapper[5028]: I1123 10:09:32.054168 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:09:32 crc kubenswrapper[5028]: E1123 10:09:32.054979 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:09:45 crc kubenswrapper[5028]: I1123 10:09:45.053856 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:09:45 crc kubenswrapper[5028]: E1123 10:09:45.054696 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:10:00 crc kubenswrapper[5028]: I1123 10:10:00.054745 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:10:00 crc kubenswrapper[5028]: E1123 10:10:00.055860 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:10:14 crc kubenswrapper[5028]: I1123 10:10:14.053387 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:10:14 crc kubenswrapper[5028]: E1123 10:10:14.054512 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:10:28 crc kubenswrapper[5028]: I1123 10:10:28.059141 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:10:28 crc kubenswrapper[5028]: E1123 10:10:28.061102 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:10:42 crc kubenswrapper[5028]: I1123 10:10:42.053774 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:10:42 crc kubenswrapper[5028]: E1123 10:10:42.054758 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:10:54 crc kubenswrapper[5028]: I1123 10:10:54.054615 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:10:54 crc kubenswrapper[5028]: E1123 10:10:54.055843 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:11:05 crc kubenswrapper[5028]: I1123 10:11:05.054725 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:11:05 crc kubenswrapper[5028]: E1123 10:11:05.055922 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:11:20 crc kubenswrapper[5028]: I1123 10:11:20.053879 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:11:20 crc kubenswrapper[5028]: E1123 10:11:20.055161 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:11:32 crc kubenswrapper[5028]: I1123 10:11:32.053924 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:11:33 crc kubenswrapper[5028]: I1123 10:11:33.013711 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"fbae17c4db9708f07496e8e8f9ccd88b3899177fc161486b65a98f23acff3689"} Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.186467 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kcpcn"] Nov 23 10:12:11 crc kubenswrapper[5028]: E1123 10:12:11.188479 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="registry-server" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.188516 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="registry-server" Nov 23 10:12:11 crc kubenswrapper[5028]: E1123 10:12:11.188548 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="extract-utilities" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.188566 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="extract-utilities" Nov 23 10:12:11 crc kubenswrapper[5028]: E1123 10:12:11.188671 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="extract-content" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.188688 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="extract-content" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.189309 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="871774ed-be11-4287-92aa-ffd8b6c7cfa3" containerName="registry-server" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.193521 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.203700 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcpcn"] Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.263374 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-utilities\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.263480 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9j8t\" (UniqueName: \"kubernetes.io/projected/efe90ca7-6d86-4c21-8568-94e06c107cb0-kube-api-access-x9j8t\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.263559 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-catalog-content\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.366150 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-utilities\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.366315 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9j8t\" (UniqueName: \"kubernetes.io/projected/efe90ca7-6d86-4c21-8568-94e06c107cb0-kube-api-access-x9j8t\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.366424 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-catalog-content\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.366869 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-utilities\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.367280 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-catalog-content\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.407690 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x9j8t\" (UniqueName: \"kubernetes.io/projected/efe90ca7-6d86-4c21-8568-94e06c107cb0-kube-api-access-x9j8t\") pod \"redhat-operators-kcpcn\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:11 crc kubenswrapper[5028]: I1123 10:12:11.529775 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:12 crc kubenswrapper[5028]: W1123 10:12:12.086189 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-e892a6853006727676a2ffd2c9ea3a27f47752d4ae2ec35acf0c5c48e850d32a WatchSource:0}: Error finding container e892a6853006727676a2ffd2c9ea3a27f47752d4ae2ec35acf0c5c48e850d32a: Status 404 returned error can't find the container with id e892a6853006727676a2ffd2c9ea3a27f47752d4ae2ec35acf0c5c48e850d32a Nov 23 10:12:12 crc kubenswrapper[5028]: I1123 10:12:12.091256 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kcpcn"] Nov 23 10:12:12 crc kubenswrapper[5028]: I1123 10:12:12.601423 5028 generic.go:334] "Generic (PLEG): container finished" podID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerID="a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636" exitCode=0 Nov 23 10:12:12 crc kubenswrapper[5028]: I1123 10:12:12.601513 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcpcn" event={"ID":"efe90ca7-6d86-4c21-8568-94e06c107cb0","Type":"ContainerDied","Data":"a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636"} Nov 23 10:12:12 crc kubenswrapper[5028]: I1123 10:12:12.601713 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcpcn" event={"ID":"efe90ca7-6d86-4c21-8568-94e06c107cb0","Type":"ContainerStarted","Data":"e892a6853006727676a2ffd2c9ea3a27f47752d4ae2ec35acf0c5c48e850d32a"} Nov 23 10:12:14 crc kubenswrapper[5028]: I1123 10:12:14.626724 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcpcn" event={"ID":"efe90ca7-6d86-4c21-8568-94e06c107cb0","Type":"ContainerStarted","Data":"621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373"} Nov 23 10:12:17 crc kubenswrapper[5028]: I1123 10:12:17.665729 5028 generic.go:334] "Generic (PLEG): container finished" podID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerID="621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373" exitCode=0 Nov 23 10:12:17 crc kubenswrapper[5028]: I1123 10:12:17.665810 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcpcn" event={"ID":"efe90ca7-6d86-4c21-8568-94e06c107cb0","Type":"ContainerDied","Data":"621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373"} Nov 23 10:12:18 crc kubenswrapper[5028]: I1123 10:12:18.686615 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcpcn" event={"ID":"efe90ca7-6d86-4c21-8568-94e06c107cb0","Type":"ContainerStarted","Data":"f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8"} Nov 23 10:12:18 crc kubenswrapper[5028]: I1123 10:12:18.734724 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kcpcn" podStartSLOduration=2.208861712 podStartE2EDuration="7.734693108s" 
podCreationTimestamp="2025-11-23 10:12:11 +0000 UTC" firstStartedPulling="2025-11-23 10:12:12.603366097 +0000 UTC m=+12116.300770876" lastFinishedPulling="2025-11-23 10:12:18.129197463 +0000 UTC m=+12121.826602272" observedRunningTime="2025-11-23 10:12:18.72190061 +0000 UTC m=+12122.419305409" watchObservedRunningTime="2025-11-23 10:12:18.734693108 +0000 UTC m=+12122.432097907" Nov 23 10:12:20 crc kubenswrapper[5028]: E1123 10:12:20.860038 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-conmon-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache]" Nov 23 10:12:21 crc kubenswrapper[5028]: I1123 10:12:21.530823 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:21 crc kubenswrapper[5028]: I1123 10:12:21.531238 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:22 crc kubenswrapper[5028]: I1123 10:12:22.607614 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kcpcn" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="registry-server" probeResult="failure" output=< Nov 23 10:12:22 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s Nov 23 10:12:22 crc kubenswrapper[5028]: > Nov 23 10:12:31 crc kubenswrapper[5028]: E1123 10:12:31.246139 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-conmon-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache]" Nov 23 10:12:31 crc kubenswrapper[5028]: I1123 10:12:31.625738 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:31 crc kubenswrapper[5028]: I1123 10:12:31.696295 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:31 crc kubenswrapper[5028]: I1123 10:12:31.874151 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcpcn"] Nov 23 10:12:32 crc kubenswrapper[5028]: I1123 10:12:32.877679 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kcpcn" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="registry-server" containerID="cri-o://f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8" gracePeriod=2 Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.445990 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.581661 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9j8t\" (UniqueName: \"kubernetes.io/projected/efe90ca7-6d86-4c21-8568-94e06c107cb0-kube-api-access-x9j8t\") pod \"efe90ca7-6d86-4c21-8568-94e06c107cb0\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.581817 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-catalog-content\") pod \"efe90ca7-6d86-4c21-8568-94e06c107cb0\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.582085 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-utilities\") pod \"efe90ca7-6d86-4c21-8568-94e06c107cb0\" (UID: \"efe90ca7-6d86-4c21-8568-94e06c107cb0\") " Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.583155 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-utilities" (OuterVolumeSpecName: "utilities") pod "efe90ca7-6d86-4c21-8568-94e06c107cb0" (UID: "efe90ca7-6d86-4c21-8568-94e06c107cb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.591016 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe90ca7-6d86-4c21-8568-94e06c107cb0-kube-api-access-x9j8t" (OuterVolumeSpecName: "kube-api-access-x9j8t") pod "efe90ca7-6d86-4c21-8568-94e06c107cb0" (UID: "efe90ca7-6d86-4c21-8568-94e06c107cb0"). InnerVolumeSpecName "kube-api-access-x9j8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.688594 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.688659 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9j8t\" (UniqueName: \"kubernetes.io/projected/efe90ca7-6d86-4c21-8568-94e06c107cb0-kube-api-access-x9j8t\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.698196 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efe90ca7-6d86-4c21-8568-94e06c107cb0" (UID: "efe90ca7-6d86-4c21-8568-94e06c107cb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.791963 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe90ca7-6d86-4c21-8568-94e06c107cb0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.889104 5028 generic.go:334] "Generic (PLEG): container finished" podID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerID="f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8" exitCode=0 Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.889195 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kcpcn" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.889222 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcpcn" event={"ID":"efe90ca7-6d86-4c21-8568-94e06c107cb0","Type":"ContainerDied","Data":"f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8"} Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.889629 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kcpcn" event={"ID":"efe90ca7-6d86-4c21-8568-94e06c107cb0","Type":"ContainerDied","Data":"e892a6853006727676a2ffd2c9ea3a27f47752d4ae2ec35acf0c5c48e850d32a"} Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.889653 5028 scope.go:117] "RemoveContainer" containerID="f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.936285 5028 scope.go:117] "RemoveContainer" containerID="621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373" Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.936549 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kcpcn"] Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.961633 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kcpcn"] Nov 23 10:12:33 crc kubenswrapper[5028]: I1123 10:12:33.967697 5028 scope.go:117] "RemoveContainer" containerID="a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636" Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.032022 5028 scope.go:117] "RemoveContainer" containerID="f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8" Nov 23 10:12:34 crc kubenswrapper[5028]: E1123 10:12:34.032859 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8\": container with ID starting with f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8 not found: ID does not exist" containerID="f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8" Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.032924 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8"} err="failed to get container status \"f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8\": rpc error: code = NotFound desc = could not find container \"f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8\": container with ID starting with f7f9087b3a1767a4d49276a8afa8f162ec017efac8d220d4af2407dae8e786f8 not found: ID does not exist" Nov 23 10:12:34 crc 
Nov 23 10:12:34 crc kubenswrapper[5028]: E1123 10:12:34.033803 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373\": container with ID starting with 621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373 not found: ID does not exist" containerID="621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.033845 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373"} err="failed to get container status \"621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373\": rpc error: code = NotFound desc = could not find container \"621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373\": container with ID starting with 621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373 not found: ID does not exist"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.033875 5028 scope.go:117] "RemoveContainer" containerID="a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636"
Nov 23 10:12:34 crc kubenswrapper[5028]: E1123 10:12:34.034320 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636\": container with ID starting with a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636 not found: ID does not exist" containerID="a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.034376 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636"} err="failed to get container status \"a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636\": rpc error: code = NotFound desc = could not find container \"a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636\": container with ID starting with a6b7e80eebdf25eadb124671352660a0c627a5945783e1d70e4b710de7028636 not found: ID does not exist"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.899507 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r4xw2"]
Nov 23 10:12:34 crc kubenswrapper[5028]: E1123 10:12:34.901364 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="extract-utilities"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.901523 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="extract-utilities"
Nov 23 10:12:34 crc kubenswrapper[5028]: E1123 10:12:34.901740 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="registry-server"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.901925 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="registry-server"
Nov 23 10:12:34 crc kubenswrapper[5028]: E1123 10:12:34.902156 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="extract-content"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.902291 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="extract-content"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.902836 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" containerName="registry-server"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.906425 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:34 crc kubenswrapper[5028]: I1123 10:12:34.916521 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4xw2"]
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.069746 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-utilities\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.069781 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe90ca7-6d86-4c21-8568-94e06c107cb0" path="/var/lib/kubelet/pods/efe90ca7-6d86-4c21-8568-94e06c107cb0/volumes"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.070106 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-catalog-content\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.070331 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fcxz\" (UniqueName: \"kubernetes.io/projected/708f4422-8ae5-4e22-a9c5-6d65511ddb88-kube-api-access-5fcxz\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.175495 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fcxz\" (UniqueName: \"kubernetes.io/projected/708f4422-8ae5-4e22-a9c5-6d65511ddb88-kube-api-access-5fcxz\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.175849 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-utilities\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.176512 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-catalog-content\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.178495 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-utilities\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.180934 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-catalog-content\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.200032 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fcxz\" (UniqueName: \"kubernetes.io/projected/708f4422-8ae5-4e22-a9c5-6d65511ddb88-kube-api-access-5fcxz\") pod \"certified-operators-r4xw2\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.282173 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r4xw2"
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.812923 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r4xw2"]
Nov 23 10:12:35 crc kubenswrapper[5028]: I1123 10:12:35.923031 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xw2" event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerStarted","Data":"85e0d4874b45a2f4db9ddedd81f4be51e5408f2146660bf1f5af059702adf894"}
Nov 23 10:12:36 crc kubenswrapper[5028]: I1123 10:12:36.943108 5028 generic.go:334] "Generic (PLEG): container finished" podID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerID="e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd" exitCode=0
Nov 23 10:12:36 crc kubenswrapper[5028]: I1123 10:12:36.943205 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xw2" event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerDied","Data":"e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd"}
Nov 23 10:12:37 crc kubenswrapper[5028]: I1123 10:12:37.956374 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xw2" event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerStarted","Data":"37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07"}
Nov 23 10:12:40 crc kubenswrapper[5028]: I1123 10:12:40.015693 5028 generic.go:334] "Generic (PLEG): container finished" podID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerID="37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07" exitCode=0
Nov 23 10:12:40 crc kubenswrapper[5028]: I1123 10:12:40.015757 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xw2" event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerDied","Data":"37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07"}
Nov 23 10:12:41 crc kubenswrapper[5028]: I1123 10:12:41.031383 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xw2" event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerStarted","Data":"9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5"}
event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerStarted","Data":"9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5"} Nov 23 10:12:41 crc kubenswrapper[5028]: I1123 10:12:41.067416 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r4xw2" podStartSLOduration=3.5938578249999997 podStartE2EDuration="7.067383016s" podCreationTimestamp="2025-11-23 10:12:34 +0000 UTC" firstStartedPulling="2025-11-23 10:12:36.947356663 +0000 UTC m=+12140.644761472" lastFinishedPulling="2025-11-23 10:12:40.420881844 +0000 UTC m=+12144.118286663" observedRunningTime="2025-11-23 10:12:41.063700764 +0000 UTC m=+12144.761105563" watchObservedRunningTime="2025-11-23 10:12:41.067383016 +0000 UTC m=+12144.764787805" Nov 23 10:12:41 crc kubenswrapper[5028]: E1123 10:12:41.553320 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-conmon-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache]" Nov 23 10:12:45 crc kubenswrapper[5028]: I1123 10:12:45.283292 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r4xw2" Nov 23 10:12:45 crc kubenswrapper[5028]: I1123 10:12:45.283877 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r4xw2" Nov 23 10:12:45 crc kubenswrapper[5028]: I1123 10:12:45.343892 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r4xw2" Nov 23 10:12:46 crc kubenswrapper[5028]: I1123 10:12:46.156447 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r4xw2" Nov 23 10:12:46 crc kubenswrapper[5028]: I1123 10:12:46.237255 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4xw2"] Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.120493 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r4xw2" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="registry-server" containerID="cri-o://9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5" gracePeriod=2 Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.778513 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r4xw2" Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.835859 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-catalog-content\") pod \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.836180 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fcxz\" (UniqueName: \"kubernetes.io/projected/708f4422-8ae5-4e22-a9c5-6d65511ddb88-kube-api-access-5fcxz\") pod \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.836321 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-utilities\") pod \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\" (UID: \"708f4422-8ae5-4e22-a9c5-6d65511ddb88\") " Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.837295 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-utilities" (OuterVolumeSpecName: "utilities") pod "708f4422-8ae5-4e22-a9c5-6d65511ddb88" (UID: "708f4422-8ae5-4e22-a9c5-6d65511ddb88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.838249 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.844320 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708f4422-8ae5-4e22-a9c5-6d65511ddb88-kube-api-access-5fcxz" (OuterVolumeSpecName: "kube-api-access-5fcxz") pod "708f4422-8ae5-4e22-a9c5-6d65511ddb88" (UID: "708f4422-8ae5-4e22-a9c5-6d65511ddb88"). InnerVolumeSpecName "kube-api-access-5fcxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.917024 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "708f4422-8ae5-4e22-a9c5-6d65511ddb88" (UID: "708f4422-8ae5-4e22-a9c5-6d65511ddb88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.940387 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/708f4422-8ae5-4e22-a9c5-6d65511ddb88-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:48 crc kubenswrapper[5028]: I1123 10:12:48.940429 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fcxz\" (UniqueName: \"kubernetes.io/projected/708f4422-8ae5-4e22-a9c5-6d65511ddb88-kube-api-access-5fcxz\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.132224 5028 generic.go:334] "Generic (PLEG): container finished" podID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerID="9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5" exitCode=0 Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.132272 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xw2" event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerDied","Data":"9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5"} Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.132313 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r4xw2" event={"ID":"708f4422-8ae5-4e22-a9c5-6d65511ddb88","Type":"ContainerDied","Data":"85e0d4874b45a2f4db9ddedd81f4be51e5408f2146660bf1f5af059702adf894"} Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.132334 5028 scope.go:117] "RemoveContainer" containerID="9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5" Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.133600 5028 util.go:48] "No ready sandbox for pod can be found. 
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.170022 5028 scope.go:117] "RemoveContainer" containerID="37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07"
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.195903 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r4xw2"]
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.201753 5028 scope.go:117] "RemoveContainer" containerID="e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd"
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.217503 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r4xw2"]
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.242110 5028 scope.go:117] "RemoveContainer" containerID="9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5"
Nov 23 10:12:49 crc kubenswrapper[5028]: E1123 10:12:49.242542 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5\": container with ID starting with 9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5 not found: ID does not exist" containerID="9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5"
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.242575 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5"} err="failed to get container status \"9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5\": rpc error: code = NotFound desc = could not find container \"9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5\": container with ID starting with 9ec98e11c2314631cf4a2903bf2af782b20789565b11f7e6cc9a7f07c7479db5 not found: ID does not exist"
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.242600 5028 scope.go:117] "RemoveContainer" containerID="37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07"
Nov 23 10:12:49 crc kubenswrapper[5028]: E1123 10:12:49.242834 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07\": container with ID starting with 37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07 not found: ID does not exist" containerID="37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07"
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.242859 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07"} err="failed to get container status \"37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07\": rpc error: code = NotFound desc = could not find container \"37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07\": container with ID starting with 37e5f4356c5ac7878b4f97a7fbae18787b8788f303a3b117f4ed6a8b962adc07 not found: ID does not exist"
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.242875 5028 scope.go:117] "RemoveContainer" containerID="e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd"
Nov 23 10:12:49 crc kubenswrapper[5028]: E1123 10:12:49.243107 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd\": container with ID starting with e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd not found: ID does not exist" containerID="e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd"
Nov 23 10:12:49 crc kubenswrapper[5028]: I1123 10:12:49.243129 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd"} err="failed to get container status \"e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd\": rpc error: code = NotFound desc = could not find container \"e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd\": container with ID starting with e8992e5a1c51c400c41b350285721e4a7db791c6b05e4c05e66d09fe9ec910fd not found: ID does not exist"
Nov 23 10:12:51 crc kubenswrapper[5028]: I1123 10:12:51.074703 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" path="/var/lib/kubelet/pods/708f4422-8ae5-4e22-a9c5-6d65511ddb88/volumes"
Nov 23 10:12:51 crc kubenswrapper[5028]: I1123 10:12:51.169114 5028 generic.go:334] "Generic (PLEG): container finished" podID="621da467-543c-4ecf-80cc-fa2bb98d7a68" containerID="c6f46c437b5dbee299b078ec01d6e8fcfdf1d6e4dbda64adce9c76adce05ea96" exitCode=0
Nov 23 10:12:51 crc kubenswrapper[5028]: I1123 10:12:51.169189 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"621da467-543c-4ecf-80cc-fa2bb98d7a68","Type":"ContainerDied","Data":"c6f46c437b5dbee299b078ec01d6e8fcfdf1d6e4dbda64adce9c76adce05ea96"}
Nov 23 10:12:51 crc kubenswrapper[5028]: E1123 10:12:51.913098 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-conmon-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache]"
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.781019 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845287 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-workdir\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845374 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5drj\" (UniqueName: \"kubernetes.io/projected/621da467-543c-4ecf-80cc-fa2bb98d7a68-kube-api-access-g5drj\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845469 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845533 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ca-certs\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845585 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-config-data\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845609 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845683 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-temporary\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845729 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config-secret\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.845764 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ssh-key\") pod \"621da467-543c-4ecf-80cc-fa2bb98d7a68\" (UID: \"621da467-543c-4ecf-80cc-fa2bb98d7a68\") "
Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.847452 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-config-data" (OuterVolumeSpecName: "config-data") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
"621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.852137 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621da467-543c-4ecf-80cc-fa2bb98d7a68-kube-api-access-g5drj" (OuterVolumeSpecName: "kube-api-access-g5drj") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "kube-api-access-g5drj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.852281 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.861699 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.863294 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.886518 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.889747 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.902851 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.903753 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "621da467-543c-4ecf-80cc-fa2bb98d7a68" (UID: "621da467-543c-4ecf-80cc-fa2bb98d7a68"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947465 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5drj\" (UniqueName: \"kubernetes.io/projected/621da467-543c-4ecf-80cc-fa2bb98d7a68-kube-api-access-g5drj\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947497 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947522 5028 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947533 5028 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/621da467-543c-4ecf-80cc-fa2bb98d7a68-config-data\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947568 5028 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947578 5028 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947628 5028 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947637 5028 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/621da467-543c-4ecf-80cc-fa2bb98d7a68-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.947646 5028 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/621da467-543c-4ecf-80cc-fa2bb98d7a68-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:52 crc kubenswrapper[5028]: I1123 10:12:52.972215 5028 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 23 10:12:53 crc kubenswrapper[5028]: I1123 10:12:53.050366 5028 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 23 10:12:53 crc kubenswrapper[5028]: I1123 10:12:53.199619 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"621da467-543c-4ecf-80cc-fa2bb98d7a68","Type":"ContainerDied","Data":"4707f7d39091a7116683d3c97a6b659305bb23b14aca695f6c63e308f8f03167"} Nov 23 10:12:53 crc kubenswrapper[5028]: I1123 10:12:53.199667 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4707f7d39091a7116683d3c97a6b659305bb23b14aca695f6c63e308f8f03167" Nov 23 10:12:53 crc kubenswrapper[5028]: I1123 10:12:53.199782 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.580916 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 10:13:01 crc kubenswrapper[5028]: E1123 10:13:01.584421 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="extract-utilities" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.584648 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="extract-utilities" Nov 23 10:13:01 crc kubenswrapper[5028]: E1123 10:13:01.584807 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621da467-543c-4ecf-80cc-fa2bb98d7a68" containerName="tempest-tests-tempest-tests-runner" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.584984 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="621da467-543c-4ecf-80cc-fa2bb98d7a68" containerName="tempest-tests-tempest-tests-runner" Nov 23 10:13:01 crc kubenswrapper[5028]: E1123 10:13:01.585683 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="registry-server" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.585823 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="registry-server" Nov 23 10:13:01 crc kubenswrapper[5028]: E1123 10:13:01.586055 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="extract-content" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.586328 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="extract-content" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.586927 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="621da467-543c-4ecf-80cc-fa2bb98d7a68" containerName="tempest-tests-tempest-tests-runner" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.587210 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="708f4422-8ae5-4e22-a9c5-6d65511ddb88" containerName="registry-server" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.589008 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.593318 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jz44m" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.601771 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.677904 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cghq\" (UniqueName: \"kubernetes.io/projected/fb1317be-089f-4970-a1e6-aeeef05af72b-kube-api-access-4cghq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb1317be-089f-4970-a1e6-aeeef05af72b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.678053 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb1317be-089f-4970-a1e6-aeeef05af72b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.781366 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cghq\" (UniqueName: \"kubernetes.io/projected/fb1317be-089f-4970-a1e6-aeeef05af72b-kube-api-access-4cghq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb1317be-089f-4970-a1e6-aeeef05af72b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.781488 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb1317be-089f-4970-a1e6-aeeef05af72b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.782173 5028 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb1317be-089f-4970-a1e6-aeeef05af72b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.823660 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cghq\" (UniqueName: \"kubernetes.io/projected/fb1317be-089f-4970-a1e6-aeeef05af72b-kube-api-access-4cghq\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb1317be-089f-4970-a1e6-aeeef05af72b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc kubenswrapper[5028]: I1123 10:13:01.825593 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"fb1317be-089f-4970-a1e6-aeeef05af72b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:01 crc 
kubenswrapper[5028]: I1123 10:13:01.922519 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 23 10:13:02 crc kubenswrapper[5028]: E1123 10:13:02.208504 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-conmon-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache]" Nov 23 10:13:02 crc kubenswrapper[5028]: I1123 10:13:02.483112 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 23 10:13:03 crc kubenswrapper[5028]: I1123 10:13:03.336119 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fb1317be-089f-4970-a1e6-aeeef05af72b","Type":"ContainerStarted","Data":"32cc662fed055a67c9a6e8092876347922e37d6d35f2ec886cdc391e64bb7c20"} Nov 23 10:13:04 crc kubenswrapper[5028]: I1123 10:13:04.348749 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"fb1317be-089f-4970-a1e6-aeeef05af72b","Type":"ContainerStarted","Data":"70f2e1a7e2491ed793d50710e88175c047daf4a56d4f76609dd5dbea40dd5381"} Nov 23 10:13:04 crc kubenswrapper[5028]: I1123 10:13:04.374247 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.192872713 podStartE2EDuration="3.374214305s" podCreationTimestamp="2025-11-23 10:13:01 +0000 UTC" firstStartedPulling="2025-11-23 10:13:02.499189079 +0000 UTC m=+12166.196593868" lastFinishedPulling="2025-11-23 10:13:03.680530681 +0000 UTC m=+12167.377935460" observedRunningTime="2025-11-23 10:13:04.364493853 +0000 UTC m=+12168.061898642" watchObservedRunningTime="2025-11-23 10:13:04.374214305 +0000 UTC m=+12168.071619084" Nov 23 10:13:12 crc kubenswrapper[5028]: E1123 10:13:12.470680 5028 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-conmon-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe90ca7_6d86_4c21_8568_94e06c107cb0.slice/crio-621af9ef4d623497cfe327565b34212cca10cb5b20e546425b21ebcfe463e373.scope\": RecentStats: unable to find data in memory cache]" Nov 23 10:14:00 crc kubenswrapper[5028]: I1123 10:14:00.946836 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:14:00 crc kubenswrapper[5028]: I1123 10:14:00.947773 5028 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.828246 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hmzt4/must-gather-d29vb"] Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.831789 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.834902 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hmzt4"/"default-dockercfg-prqqk" Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.835351 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hmzt4"/"openshift-service-ca.crt" Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.835969 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hmzt4"/"kube-root-ca.crt" Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.900938 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hmzt4/must-gather-d29vb"] Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.944393 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/194cef00-7b28-406f-a920-f47a965d5f6e-must-gather-output\") pod \"must-gather-d29vb\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") " pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:23 crc kubenswrapper[5028]: I1123 10:14:23.944473 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lqm\" (UniqueName: \"kubernetes.io/projected/194cef00-7b28-406f-a920-f47a965d5f6e-kube-api-access-b5lqm\") pod \"must-gather-d29vb\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") " pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:24 crc kubenswrapper[5028]: I1123 10:14:24.046293 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/194cef00-7b28-406f-a920-f47a965d5f6e-must-gather-output\") pod \"must-gather-d29vb\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") " pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:24 crc kubenswrapper[5028]: I1123 10:14:24.046379 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lqm\" (UniqueName: \"kubernetes.io/projected/194cef00-7b28-406f-a920-f47a965d5f6e-kube-api-access-b5lqm\") pod \"must-gather-d29vb\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") " pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:24 crc kubenswrapper[5028]: I1123 10:14:24.047182 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/194cef00-7b28-406f-a920-f47a965d5f6e-must-gather-output\") pod \"must-gather-d29vb\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") " pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:24 crc kubenswrapper[5028]: I1123 10:14:24.069739 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lqm\" (UniqueName: 
\"kubernetes.io/projected/194cef00-7b28-406f-a920-f47a965d5f6e-kube-api-access-b5lqm\") pod \"must-gather-d29vb\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") " pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:24 crc kubenswrapper[5028]: I1123 10:14:24.154206 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/must-gather-d29vb" Nov 23 10:14:25 crc kubenswrapper[5028]: I1123 10:14:24.724654 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hmzt4/must-gather-d29vb"] Nov 23 10:14:25 crc kubenswrapper[5028]: I1123 10:14:24.727990 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 10:14:25 crc kubenswrapper[5028]: I1123 10:14:25.508448 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/must-gather-d29vb" event={"ID":"194cef00-7b28-406f-a920-f47a965d5f6e","Type":"ContainerStarted","Data":"05706a9da088b657408cef77e4f8e9fc78e8a6eb91b8566a026e450b8f17c96a"} Nov 23 10:14:30 crc kubenswrapper[5028]: I1123 10:14:30.946513 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:14:30 crc kubenswrapper[5028]: I1123 10:14:30.947231 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:14:32 crc kubenswrapper[5028]: I1123 10:14:32.603973 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/must-gather-d29vb" event={"ID":"194cef00-7b28-406f-a920-f47a965d5f6e","Type":"ContainerStarted","Data":"f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94"} Nov 23 10:14:33 crc kubenswrapper[5028]: I1123 10:14:33.622586 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/must-gather-d29vb" event={"ID":"194cef00-7b28-406f-a920-f47a965d5f6e","Type":"ContainerStarted","Data":"1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a"} Nov 23 10:14:33 crc kubenswrapper[5028]: I1123 10:14:33.657236 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hmzt4/must-gather-d29vb" podStartSLOduration=3.4830757119999998 podStartE2EDuration="10.65720015s" podCreationTimestamp="2025-11-23 10:14:23 +0000 UTC" firstStartedPulling="2025-11-23 10:14:24.727581182 +0000 UTC m=+12248.424985961" lastFinishedPulling="2025-11-23 10:14:31.90170558 +0000 UTC m=+12255.599110399" observedRunningTime="2025-11-23 10:14:33.642666879 +0000 UTC m=+12257.340071688" watchObservedRunningTime="2025-11-23 10:14:33.65720015 +0000 UTC m=+12257.354604969" Nov 23 10:14:37 crc kubenswrapper[5028]: E1123 10:14:37.644277 5028 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.145:50454->38.102.83.145:39767: write tcp 38.102.83.145:50454->38.102.83.145:39767: write: connection reset by peer Nov 23 10:14:38 crc kubenswrapper[5028]: I1123 10:14:38.818128 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-wsgvh"] Nov 23 10:14:38 crc 
Nov 23 10:14:38 crc kubenswrapper[5028]: I1123 10:14:38.987815 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx6mn\" (UniqueName: \"kubernetes.io/projected/774fc92c-10b9-449d-90a5-f0152b6850ff-kube-api-access-nx6mn\") pod \"crc-debug-wsgvh\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " pod="openshift-must-gather-hmzt4/crc-debug-wsgvh"
Nov 23 10:14:38 crc kubenswrapper[5028]: I1123 10:14:38.987876 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/774fc92c-10b9-449d-90a5-f0152b6850ff-host\") pod \"crc-debug-wsgvh\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " pod="openshift-must-gather-hmzt4/crc-debug-wsgvh"
Nov 23 10:14:39 crc kubenswrapper[5028]: I1123 10:14:39.090260 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/774fc92c-10b9-449d-90a5-f0152b6850ff-host\") pod \"crc-debug-wsgvh\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " pod="openshift-must-gather-hmzt4/crc-debug-wsgvh"
Nov 23 10:14:39 crc kubenswrapper[5028]: I1123 10:14:39.090331 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx6mn\" (UniqueName: \"kubernetes.io/projected/774fc92c-10b9-449d-90a5-f0152b6850ff-kube-api-access-nx6mn\") pod \"crc-debug-wsgvh\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " pod="openshift-must-gather-hmzt4/crc-debug-wsgvh"
Nov 23 10:14:39 crc kubenswrapper[5028]: I1123 10:14:39.090883 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/774fc92c-10b9-449d-90a5-f0152b6850ff-host\") pod \"crc-debug-wsgvh\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " pod="openshift-must-gather-hmzt4/crc-debug-wsgvh"
Nov 23 10:14:39 crc kubenswrapper[5028]: I1123 10:14:39.123354 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx6mn\" (UniqueName: \"kubernetes.io/projected/774fc92c-10b9-449d-90a5-f0152b6850ff-kube-api-access-nx6mn\") pod \"crc-debug-wsgvh\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " pod="openshift-must-gather-hmzt4/crc-debug-wsgvh"
Nov 23 10:14:39 crc kubenswrapper[5028]: I1123 10:14:39.147643 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-wsgvh"
Nov 23 10:14:39 crc kubenswrapper[5028]: I1123 10:14:39.694905 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/crc-debug-wsgvh" event={"ID":"774fc92c-10b9-449d-90a5-f0152b6850ff","Type":"ContainerStarted","Data":"32f0e5c956bd82e8e10918754f835088022f47a12406a1437b89ce26608974e6"}
Nov 23 10:14:49 crc kubenswrapper[5028]: I1123 10:14:49.849777 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/crc-debug-wsgvh" event={"ID":"774fc92c-10b9-449d-90a5-f0152b6850ff","Type":"ContainerStarted","Data":"9b66b8454550cd63fa69973ffdc0d362119251d4c11165b01cbf12ab05860991"}
Nov 23 10:14:49 crc kubenswrapper[5028]: I1123 10:14:49.871295 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hmzt4/crc-debug-wsgvh" podStartSLOduration=2.349134383 podStartE2EDuration="11.871273132s" podCreationTimestamp="2025-11-23 10:14:38 +0000 UTC" firstStartedPulling="2025-11-23 10:14:39.191288622 +0000 UTC m=+12262.888693401" lastFinishedPulling="2025-11-23 10:14:48.713427371 +0000 UTC m=+12272.410832150" observedRunningTime="2025-11-23 10:14:49.866862392 +0000 UTC m=+12273.564267171" watchObservedRunningTime="2025-11-23 10:14:49.871273132 +0000 UTC m=+12273.568677911"
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.176166 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm"]
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.181235 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm"
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.185185 5028 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.185613 5028 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.212161 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm"]
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.360872 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fc6076-1566-481c-a267-c6a6970ecd39-config-volume\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm"
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.361248 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fc6076-1566-481c-a267-c6a6970ecd39-secret-volume\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm"
Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.361769 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76spm\" (UniqueName: \"kubernetes.io/projected/e5fc6076-1566-481c-a267-c6a6970ecd39-kube-api-access-76spm\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm"
\"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.463819 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fc6076-1566-481c-a267-c6a6970ecd39-secret-volume\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.464023 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76spm\" (UniqueName: \"kubernetes.io/projected/e5fc6076-1566-481c-a267-c6a6970ecd39-kube-api-access-76spm\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.464077 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fc6076-1566-481c-a267-c6a6970ecd39-config-volume\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.465035 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fc6076-1566-481c-a267-c6a6970ecd39-config-volume\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.470998 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fc6076-1566-481c-a267-c6a6970ecd39-secret-volume\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.486292 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76spm\" (UniqueName: \"kubernetes.io/projected/e5fc6076-1566-481c-a267-c6a6970ecd39-kube-api-access-76spm\") pod \"collect-profiles-29398215-j88jm\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.531691 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.947456 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.948064 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.948235 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.949502 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbae17c4db9708f07496e8e8f9ccd88b3899177fc161486b65a98f23acff3689"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 10:15:00 crc kubenswrapper[5028]: I1123 10:15:00.949615 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://fbae17c4db9708f07496e8e8f9ccd88b3899177fc161486b65a98f23acff3689" gracePeriod=600 Nov 23 10:15:01 crc kubenswrapper[5028]: I1123 10:15:01.102503 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm"] Nov 23 10:15:02 crc kubenswrapper[5028]: I1123 10:15:02.033287 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="fbae17c4db9708f07496e8e8f9ccd88b3899177fc161486b65a98f23acff3689" exitCode=0 Nov 23 10:15:02 crc kubenswrapper[5028]: I1123 10:15:02.033378 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"fbae17c4db9708f07496e8e8f9ccd88b3899177fc161486b65a98f23acff3689"} Nov 23 10:15:02 crc kubenswrapper[5028]: I1123 10:15:02.034004 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788"} Nov 23 10:15:02 crc kubenswrapper[5028]: I1123 10:15:02.034033 5028 scope.go:117] "RemoveContainer" containerID="110ddf79aed11051d6e8f902576f5d509072980c0e5f27f3c626d4a57e84eb3b" Nov 23 10:15:02 crc kubenswrapper[5028]: I1123 10:15:02.040582 5028 generic.go:334] "Generic (PLEG): container finished" podID="e5fc6076-1566-481c-a267-c6a6970ecd39" containerID="c982f2d4aeacd1123e15dbd63fcb4689bb9eb3c9bab3ec7e0ce0f4fcd7d2aa8e" exitCode=0 Nov 23 10:15:02 crc kubenswrapper[5028]: I1123 10:15:02.040636 5028 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" event={"ID":"e5fc6076-1566-481c-a267-c6a6970ecd39","Type":"ContainerDied","Data":"c982f2d4aeacd1123e15dbd63fcb4689bb9eb3c9bab3ec7e0ce0f4fcd7d2aa8e"} Nov 23 10:15:02 crc kubenswrapper[5028]: I1123 10:15:02.040668 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" event={"ID":"e5fc6076-1566-481c-a267-c6a6970ecd39","Type":"ContainerStarted","Data":"65791c9f63d565c468720729f94ab2bd2cd781bea11a3b01af755f1438389ef9"} Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.068243 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" event={"ID":"e5fc6076-1566-481c-a267-c6a6970ecd39","Type":"ContainerDied","Data":"65791c9f63d565c468720729f94ab2bd2cd781bea11a3b01af755f1438389ef9"} Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.068608 5028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65791c9f63d565c468720729f94ab2bd2cd781bea11a3b01af755f1438389ef9" Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.132162 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.257080 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fc6076-1566-481c-a267-c6a6970ecd39-secret-volume\") pod \"e5fc6076-1566-481c-a267-c6a6970ecd39\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.257876 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76spm\" (UniqueName: \"kubernetes.io/projected/e5fc6076-1566-481c-a267-c6a6970ecd39-kube-api-access-76spm\") pod \"e5fc6076-1566-481c-a267-c6a6970ecd39\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.258032 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fc6076-1566-481c-a267-c6a6970ecd39-config-volume\") pod \"e5fc6076-1566-481c-a267-c6a6970ecd39\" (UID: \"e5fc6076-1566-481c-a267-c6a6970ecd39\") " Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.259096 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5fc6076-1566-481c-a267-c6a6970ecd39-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5fc6076-1566-481c-a267-c6a6970ecd39" (UID: "e5fc6076-1566-481c-a267-c6a6970ecd39"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.260043 5028 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5fc6076-1566-481c-a267-c6a6970ecd39-config-volume\") on node \"crc\" DevicePath \"\"" Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.266790 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fc6076-1566-481c-a267-c6a6970ecd39-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5fc6076-1566-481c-a267-c6a6970ecd39" (UID: "e5fc6076-1566-481c-a267-c6a6970ecd39"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.274175 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fc6076-1566-481c-a267-c6a6970ecd39-kube-api-access-76spm" (OuterVolumeSpecName: "kube-api-access-76spm") pod "e5fc6076-1566-481c-a267-c6a6970ecd39" (UID: "e5fc6076-1566-481c-a267-c6a6970ecd39"). InnerVolumeSpecName "kube-api-access-76spm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.362144 5028 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5fc6076-1566-481c-a267-c6a6970ecd39-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 23 10:15:04 crc kubenswrapper[5028]: I1123 10:15:04.362183 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76spm\" (UniqueName: \"kubernetes.io/projected/e5fc6076-1566-481c-a267-c6a6970ecd39-kube-api-access-76spm\") on node \"crc\" DevicePath \"\"" Nov 23 10:15:05 crc kubenswrapper[5028]: I1123 10:15:05.086026 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29398215-j88jm" Nov 23 10:15:05 crc kubenswrapper[5028]: I1123 10:15:05.241056 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2"] Nov 23 10:15:05 crc kubenswrapper[5028]: I1123 10:15:05.253088 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29398170-2j7d2"] Nov 23 10:15:07 crc kubenswrapper[5028]: I1123 10:15:07.092072 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd4f83ee-b756-4bec-be49-a05b0efd2ea1" path="/var/lib/kubelet/pods/bd4f83ee-b756-4bec-be49-a05b0efd2ea1/volumes" Nov 23 10:15:33 crc kubenswrapper[5028]: I1123 10:15:33.475037 5028 generic.go:334] "Generic (PLEG): container finished" podID="774fc92c-10b9-449d-90a5-f0152b6850ff" containerID="9b66b8454550cd63fa69973ffdc0d362119251d4c11165b01cbf12ab05860991" exitCode=0 Nov 23 10:15:33 crc kubenswrapper[5028]: I1123 10:15:33.475134 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/crc-debug-wsgvh" event={"ID":"774fc92c-10b9-449d-90a5-f0152b6850ff","Type":"ContainerDied","Data":"9b66b8454550cd63fa69973ffdc0d362119251d4c11165b01cbf12ab05860991"} Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.629473 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-wsgvh" Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.671597 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-wsgvh"] Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.672814 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/774fc92c-10b9-449d-90a5-f0152b6850ff-host\") pod \"774fc92c-10b9-449d-90a5-f0152b6850ff\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.673183 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx6mn\" (UniqueName: \"kubernetes.io/projected/774fc92c-10b9-449d-90a5-f0152b6850ff-kube-api-access-nx6mn\") pod \"774fc92c-10b9-449d-90a5-f0152b6850ff\" (UID: \"774fc92c-10b9-449d-90a5-f0152b6850ff\") " Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.673781 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774fc92c-10b9-449d-90a5-f0152b6850ff-host" (OuterVolumeSpecName: "host") pod "774fc92c-10b9-449d-90a5-f0152b6850ff" (UID: "774fc92c-10b9-449d-90a5-f0152b6850ff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.680285 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-wsgvh"] Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.684573 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774fc92c-10b9-449d-90a5-f0152b6850ff-kube-api-access-nx6mn" (OuterVolumeSpecName: "kube-api-access-nx6mn") pod "774fc92c-10b9-449d-90a5-f0152b6850ff" (UID: "774fc92c-10b9-449d-90a5-f0152b6850ff"). InnerVolumeSpecName "kube-api-access-nx6mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.775706 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx6mn\" (UniqueName: \"kubernetes.io/projected/774fc92c-10b9-449d-90a5-f0152b6850ff-kube-api-access-nx6mn\") on node \"crc\" DevicePath \"\"" Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.775740 5028 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/774fc92c-10b9-449d-90a5-f0152b6850ff-host\") on node \"crc\" DevicePath \"\"" Nov 23 10:15:34 crc kubenswrapper[5028]: I1123 10:15:34.942592 5028 scope.go:117] "RemoveContainer" containerID="9a15b30dbb1ce294503e9b0a957af98cf05812b9a12f88a58eb42e50f5b21546" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.071465 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774fc92c-10b9-449d-90a5-f0152b6850ff" path="/var/lib/kubelet/pods/774fc92c-10b9-449d-90a5-f0152b6850ff/volumes" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.508901 5028 scope.go:117] "RemoveContainer" containerID="9b66b8454550cd63fa69973ffdc0d362119251d4c11165b01cbf12ab05860991" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.508989 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-wsgvh" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.993095 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-pshxb"] Nov 23 10:15:35 crc kubenswrapper[5028]: E1123 10:15:35.994030 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774fc92c-10b9-449d-90a5-f0152b6850ff" containerName="container-00" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.994051 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="774fc92c-10b9-449d-90a5-f0152b6850ff" containerName="container-00" Nov 23 10:15:35 crc kubenswrapper[5028]: E1123 10:15:35.994091 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fc6076-1566-481c-a267-c6a6970ecd39" containerName="collect-profiles" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.994099 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fc6076-1566-481c-a267-c6a6970ecd39" containerName="collect-profiles" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.994371 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="774fc92c-10b9-449d-90a5-f0152b6850ff" containerName="container-00" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.994393 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fc6076-1566-481c-a267-c6a6970ecd39" containerName="collect-profiles" Nov 23 10:15:35 crc kubenswrapper[5028]: I1123 10:15:35.995272 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-pshxb" Nov 23 10:15:36 crc kubenswrapper[5028]: I1123 10:15:36.027047 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15e3cd77-581f-47d7-a81b-51b845925ff4-host\") pod \"crc-debug-pshxb\" (UID: \"15e3cd77-581f-47d7-a81b-51b845925ff4\") " pod="openshift-must-gather-hmzt4/crc-debug-pshxb" Nov 23 10:15:36 crc kubenswrapper[5028]: I1123 10:15:36.027684 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9l2\" (UniqueName: \"kubernetes.io/projected/15e3cd77-581f-47d7-a81b-51b845925ff4-kube-api-access-lx9l2\") pod \"crc-debug-pshxb\" (UID: \"15e3cd77-581f-47d7-a81b-51b845925ff4\") " pod="openshift-must-gather-hmzt4/crc-debug-pshxb" Nov 23 10:15:36 crc kubenswrapper[5028]: I1123 10:15:36.129982 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9l2\" (UniqueName: \"kubernetes.io/projected/15e3cd77-581f-47d7-a81b-51b845925ff4-kube-api-access-lx9l2\") pod \"crc-debug-pshxb\" (UID: \"15e3cd77-581f-47d7-a81b-51b845925ff4\") " pod="openshift-must-gather-hmzt4/crc-debug-pshxb" Nov 23 10:15:36 crc kubenswrapper[5028]: I1123 10:15:36.130077 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15e3cd77-581f-47d7-a81b-51b845925ff4-host\") pod \"crc-debug-pshxb\" (UID: \"15e3cd77-581f-47d7-a81b-51b845925ff4\") " pod="openshift-must-gather-hmzt4/crc-debug-pshxb" Nov 23 10:15:36 crc kubenswrapper[5028]: I1123 10:15:36.130773 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15e3cd77-581f-47d7-a81b-51b845925ff4-host\") pod \"crc-debug-pshxb\" (UID: \"15e3cd77-581f-47d7-a81b-51b845925ff4\") " pod="openshift-must-gather-hmzt4/crc-debug-pshxb" Nov 23 10:15:36 crc 
Nov 23 10:15:36 crc kubenswrapper[5028]: I1123 10:15:36.312820 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-pshxb"
Nov 23 10:15:36 crc kubenswrapper[5028]: I1123 10:15:36.526193 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/crc-debug-pshxb" event={"ID":"15e3cd77-581f-47d7-a81b-51b845925ff4","Type":"ContainerStarted","Data":"f192ce4f3a4bdc1b44687f000e2c9eeb1cf8b09aa1a7b9eba65ae04adde95a8b"}
Nov 23 10:15:37 crc kubenswrapper[5028]: I1123 10:15:37.546878 5028 generic.go:334] "Generic (PLEG): container finished" podID="15e3cd77-581f-47d7-a81b-51b845925ff4" containerID="e2518d599ba8fbee89a1385f521071e0e539f2e4100e12c8a0677932a5b1e77e" exitCode=0
Nov 23 10:15:37 crc kubenswrapper[5028]: I1123 10:15:37.547064 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/crc-debug-pshxb" event={"ID":"15e3cd77-581f-47d7-a81b-51b845925ff4","Type":"ContainerDied","Data":"e2518d599ba8fbee89a1385f521071e0e539f2e4100e12c8a0677932a5b1e77e"}
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.263045 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-pshxb"]
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.276319 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-pshxb"]
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.674133 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-pshxb"
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.687403 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx9l2\" (UniqueName: \"kubernetes.io/projected/15e3cd77-581f-47d7-a81b-51b845925ff4-kube-api-access-lx9l2\") pod \"15e3cd77-581f-47d7-a81b-51b845925ff4\" (UID: \"15e3cd77-581f-47d7-a81b-51b845925ff4\") "
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.687501 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15e3cd77-581f-47d7-a81b-51b845925ff4-host\") pod \"15e3cd77-581f-47d7-a81b-51b845925ff4\" (UID: \"15e3cd77-581f-47d7-a81b-51b845925ff4\") "
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.687620 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15e3cd77-581f-47d7-a81b-51b845925ff4-host" (OuterVolumeSpecName: "host") pod "15e3cd77-581f-47d7-a81b-51b845925ff4" (UID: "15e3cd77-581f-47d7-a81b-51b845925ff4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.688518 5028 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15e3cd77-581f-47d7-a81b-51b845925ff4-host\") on node \"crc\" DevicePath \"\""
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.698410 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e3cd77-581f-47d7-a81b-51b845925ff4-kube-api-access-lx9l2" (OuterVolumeSpecName: "kube-api-access-lx9l2") pod "15e3cd77-581f-47d7-a81b-51b845925ff4" (UID: "15e3cd77-581f-47d7-a81b-51b845925ff4"). InnerVolumeSpecName "kube-api-access-lx9l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 10:15:38 crc kubenswrapper[5028]: I1123 10:15:38.792914 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx9l2\" (UniqueName: \"kubernetes.io/projected/15e3cd77-581f-47d7-a81b-51b845925ff4-kube-api-access-lx9l2\") on node \"crc\" DevicePath \"\""
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.072445 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15e3cd77-581f-47d7-a81b-51b845925ff4" path="/var/lib/kubelet/pods/15e3cd77-581f-47d7-a81b-51b845925ff4/volumes"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.483541 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-mntkv"]
Nov 23 10:15:39 crc kubenswrapper[5028]: E1123 10:15:39.484595 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15e3cd77-581f-47d7-a81b-51b845925ff4" containerName="container-00"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.484622 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="15e3cd77-581f-47d7-a81b-51b845925ff4" containerName="container-00"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.484904 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="15e3cd77-581f-47d7-a81b-51b845925ff4" containerName="container-00"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.488501 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-mntkv"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.512550 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8ztv\" (UniqueName: \"kubernetes.io/projected/c35c66dc-79dc-4317-85d9-f31de062a230-kube-api-access-p8ztv\") pod \"crc-debug-mntkv\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " pod="openshift-must-gather-hmzt4/crc-debug-mntkv"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.512649 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35c66dc-79dc-4317-85d9-f31de062a230-host\") pod \"crc-debug-mntkv\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " pod="openshift-must-gather-hmzt4/crc-debug-mntkv"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.581715 5028 scope.go:117] "RemoveContainer" containerID="e2518d599ba8fbee89a1385f521071e0e539f2e4100e12c8a0677932a5b1e77e"
Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.581916 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-pshxb"
Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-pshxb" Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.615543 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8ztv\" (UniqueName: \"kubernetes.io/projected/c35c66dc-79dc-4317-85d9-f31de062a230-kube-api-access-p8ztv\") pod \"crc-debug-mntkv\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " pod="openshift-must-gather-hmzt4/crc-debug-mntkv" Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.615643 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35c66dc-79dc-4317-85d9-f31de062a230-host\") pod \"crc-debug-mntkv\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " pod="openshift-must-gather-hmzt4/crc-debug-mntkv" Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.615882 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35c66dc-79dc-4317-85d9-f31de062a230-host\") pod \"crc-debug-mntkv\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " pod="openshift-must-gather-hmzt4/crc-debug-mntkv" Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.647144 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8ztv\" (UniqueName: \"kubernetes.io/projected/c35c66dc-79dc-4317-85d9-f31de062a230-kube-api-access-p8ztv\") pod \"crc-debug-mntkv\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " pod="openshift-must-gather-hmzt4/crc-debug-mntkv" Nov 23 10:15:39 crc kubenswrapper[5028]: I1123 10:15:39.827494 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-mntkv" Nov 23 10:15:40 crc kubenswrapper[5028]: I1123 10:15:40.595513 5028 generic.go:334] "Generic (PLEG): container finished" podID="c35c66dc-79dc-4317-85d9-f31de062a230" containerID="7bceceeda37b278c5fbd587d18b945579078f4e8510fff0ab42a01b6058aa82e" exitCode=0 Nov 23 10:15:40 crc kubenswrapper[5028]: I1123 10:15:40.595561 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/crc-debug-mntkv" event={"ID":"c35c66dc-79dc-4317-85d9-f31de062a230","Type":"ContainerDied","Data":"7bceceeda37b278c5fbd587d18b945579078f4e8510fff0ab42a01b6058aa82e"} Nov 23 10:15:40 crc kubenswrapper[5028]: I1123 10:15:40.596006 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/crc-debug-mntkv" event={"ID":"c35c66dc-79dc-4317-85d9-f31de062a230","Type":"ContainerStarted","Data":"51fc3a3ef77ff53d33801b7cdc7d7189e21989d430597d5651e954020650d8e5"} Nov 23 10:15:40 crc kubenswrapper[5028]: I1123 10:15:40.649222 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-mntkv"] Nov 23 10:15:40 crc kubenswrapper[5028]: I1123 10:15:40.664491 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hmzt4/crc-debug-mntkv"] Nov 23 10:15:41 crc kubenswrapper[5028]: I1123 10:15:41.736047 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-mntkv" Nov 23 10:15:41 crc kubenswrapper[5028]: I1123 10:15:41.770932 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35c66dc-79dc-4317-85d9-f31de062a230-host\") pod \"c35c66dc-79dc-4317-85d9-f31de062a230\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " Nov 23 10:15:41 crc kubenswrapper[5028]: I1123 10:15:41.771062 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c35c66dc-79dc-4317-85d9-f31de062a230-host" (OuterVolumeSpecName: "host") pod "c35c66dc-79dc-4317-85d9-f31de062a230" (UID: "c35c66dc-79dc-4317-85d9-f31de062a230"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 23 10:15:41 crc kubenswrapper[5028]: I1123 10:15:41.771217 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8ztv\" (UniqueName: \"kubernetes.io/projected/c35c66dc-79dc-4317-85d9-f31de062a230-kube-api-access-p8ztv\") pod \"c35c66dc-79dc-4317-85d9-f31de062a230\" (UID: \"c35c66dc-79dc-4317-85d9-f31de062a230\") " Nov 23 10:15:41 crc kubenswrapper[5028]: I1123 10:15:41.772132 5028 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35c66dc-79dc-4317-85d9-f31de062a230-host\") on node \"crc\" DevicePath \"\"" Nov 23 10:15:41 crc kubenswrapper[5028]: I1123 10:15:41.779584 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35c66dc-79dc-4317-85d9-f31de062a230-kube-api-access-p8ztv" (OuterVolumeSpecName: "kube-api-access-p8ztv") pod "c35c66dc-79dc-4317-85d9-f31de062a230" (UID: "c35c66dc-79dc-4317-85d9-f31de062a230"). InnerVolumeSpecName "kube-api-access-p8ztv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:15:41 crc kubenswrapper[5028]: I1123 10:15:41.874261 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8ztv\" (UniqueName: \"kubernetes.io/projected/c35c66dc-79dc-4317-85d9-f31de062a230-kube-api-access-p8ztv\") on node \"crc\" DevicePath \"\"" Nov 23 10:15:42 crc kubenswrapper[5028]: I1123 10:15:42.625601 5028 scope.go:117] "RemoveContainer" containerID="7bceceeda37b278c5fbd587d18b945579078f4e8510fff0ab42a01b6058aa82e" Nov 23 10:15:42 crc kubenswrapper[5028]: I1123 10:15:42.625698 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hmzt4/crc-debug-mntkv" Nov 23 10:15:43 crc kubenswrapper[5028]: I1123 10:15:43.068781 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35c66dc-79dc-4317-85d9-f31de062a230" path="/var/lib/kubelet/pods/c35c66dc-79dc-4317-85d9-f31de062a230/volumes" Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.949631 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zqpzh"] Nov 23 10:16:33 crc kubenswrapper[5028]: E1123 10:16:33.951454 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35c66dc-79dc-4317-85d9-f31de062a230" containerName="container-00" Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.951564 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35c66dc-79dc-4317-85d9-f31de062a230" containerName="container-00" Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.952683 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35c66dc-79dc-4317-85d9-f31de062a230" containerName="container-00" Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.954489 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.970089 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqpzh"] Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.976370 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp97l\" (UniqueName: \"kubernetes.io/projected/dd6b49da-c391-4682-875a-4e3a5e10feb3-kube-api-access-dp97l\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.976625 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-catalog-content\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:33 crc kubenswrapper[5028]: I1123 10:16:33.976715 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-utilities\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.079579 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-catalog-content\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.079649 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-utilities\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.079787 5028 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp97l\" (UniqueName: \"kubernetes.io/projected/dd6b49da-c391-4682-875a-4e3a5e10feb3-kube-api-access-dp97l\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.080696 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-catalog-content\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.080925 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-utilities\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.102730 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp97l\" (UniqueName: \"kubernetes.io/projected/dd6b49da-c391-4682-875a-4e3a5e10feb3-kube-api-access-dp97l\") pod \"redhat-marketplace-zqpzh\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.283817 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:34 crc kubenswrapper[5028]: I1123 10:16:34.828297 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqpzh"] Nov 23 10:16:35 crc kubenswrapper[5028]: I1123 10:16:35.375547 5028 generic.go:334] "Generic (PLEG): container finished" podID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerID="88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09" exitCode=0 Nov 23 10:16:35 crc kubenswrapper[5028]: I1123 10:16:35.375831 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqpzh" event={"ID":"dd6b49da-c391-4682-875a-4e3a5e10feb3","Type":"ContainerDied","Data":"88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09"} Nov 23 10:16:35 crc kubenswrapper[5028]: I1123 10:16:35.375861 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqpzh" event={"ID":"dd6b49da-c391-4682-875a-4e3a5e10feb3","Type":"ContainerStarted","Data":"895ded25fcf0ac3444ff619979c6651143bde7ed86782d8e3a529ca594958631"} Nov 23 10:16:36 crc kubenswrapper[5028]: I1123 10:16:36.390716 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqpzh" event={"ID":"dd6b49da-c391-4682-875a-4e3a5e10feb3","Type":"ContainerStarted","Data":"eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e"} Nov 23 10:16:37 crc kubenswrapper[5028]: I1123 10:16:37.405781 5028 generic.go:334] "Generic (PLEG): container finished" podID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerID="eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e" exitCode=0 Nov 23 10:16:37 crc kubenswrapper[5028]: I1123 10:16:37.405855 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqpzh" 
event={"ID":"dd6b49da-c391-4682-875a-4e3a5e10feb3","Type":"ContainerDied","Data":"eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e"} Nov 23 10:16:38 crc kubenswrapper[5028]: I1123 10:16:38.420589 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqpzh" event={"ID":"dd6b49da-c391-4682-875a-4e3a5e10feb3","Type":"ContainerStarted","Data":"c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921"} Nov 23 10:16:38 crc kubenswrapper[5028]: I1123 10:16:38.444324 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zqpzh" podStartSLOduration=2.99074796 podStartE2EDuration="5.444295052s" podCreationTimestamp="2025-11-23 10:16:33 +0000 UTC" firstStartedPulling="2025-11-23 10:16:35.378457358 +0000 UTC m=+12379.075862147" lastFinishedPulling="2025-11-23 10:16:37.83200445 +0000 UTC m=+12381.529409239" observedRunningTime="2025-11-23 10:16:38.442466247 +0000 UTC m=+12382.139871046" watchObservedRunningTime="2025-11-23 10:16:38.444295052 +0000 UTC m=+12382.141699841" Nov 23 10:16:44 crc kubenswrapper[5028]: I1123 10:16:44.284682 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:44 crc kubenswrapper[5028]: I1123 10:16:44.285259 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:44 crc kubenswrapper[5028]: I1123 10:16:44.338510 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:44 crc kubenswrapper[5028]: I1123 10:16:44.677097 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:44 crc kubenswrapper[5028]: I1123 10:16:44.752337 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqpzh"] Nov 23 10:16:46 crc kubenswrapper[5028]: I1123 10:16:46.636436 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zqpzh" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="registry-server" containerID="cri-o://c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921" gracePeriod=2 Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.152161 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.332390 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-utilities\") pod \"dd6b49da-c391-4682-875a-4e3a5e10feb3\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.332874 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-catalog-content\") pod \"dd6b49da-c391-4682-875a-4e3a5e10feb3\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.333033 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp97l\" (UniqueName: \"kubernetes.io/projected/dd6b49da-c391-4682-875a-4e3a5e10feb3-kube-api-access-dp97l\") pod \"dd6b49da-c391-4682-875a-4e3a5e10feb3\" (UID: \"dd6b49da-c391-4682-875a-4e3a5e10feb3\") " Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.333197 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-utilities" (OuterVolumeSpecName: "utilities") pod "dd6b49da-c391-4682-875a-4e3a5e10feb3" (UID: "dd6b49da-c391-4682-875a-4e3a5e10feb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.334211 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.340065 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6b49da-c391-4682-875a-4e3a5e10feb3-kube-api-access-dp97l" (OuterVolumeSpecName: "kube-api-access-dp97l") pod "dd6b49da-c391-4682-875a-4e3a5e10feb3" (UID: "dd6b49da-c391-4682-875a-4e3a5e10feb3"). InnerVolumeSpecName "kube-api-access-dp97l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.351090 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd6b49da-c391-4682-875a-4e3a5e10feb3" (UID: "dd6b49da-c391-4682-875a-4e3a5e10feb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.435636 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp97l\" (UniqueName: \"kubernetes.io/projected/dd6b49da-c391-4682-875a-4e3a5e10feb3-kube-api-access-dp97l\") on node \"crc\" DevicePath \"\"" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.435673 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd6b49da-c391-4682-875a-4e3a5e10feb3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.652498 5028 generic.go:334] "Generic (PLEG): container finished" podID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerID="c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921" exitCode=0 Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.652548 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqpzh" event={"ID":"dd6b49da-c391-4682-875a-4e3a5e10feb3","Type":"ContainerDied","Data":"c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921"} Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.652584 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqpzh" event={"ID":"dd6b49da-c391-4682-875a-4e3a5e10feb3","Type":"ContainerDied","Data":"895ded25fcf0ac3444ff619979c6651143bde7ed86782d8e3a529ca594958631"} Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.652598 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqpzh" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.652603 5028 scope.go:117] "RemoveContainer" containerID="c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.675169 5028 scope.go:117] "RemoveContainer" containerID="eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.701757 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqpzh"] Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.722664 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqpzh"] Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.724571 5028 scope.go:117] "RemoveContainer" containerID="88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.786280 5028 scope.go:117] "RemoveContainer" containerID="c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921" Nov 23 10:16:47 crc kubenswrapper[5028]: E1123 10:16:47.786804 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921\": container with ID starting with c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921 not found: ID does not exist" containerID="c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.786853 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921"} err="failed to get container status 
\"c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921\": rpc error: code = NotFound desc = could not find container \"c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921\": container with ID starting with c06e99f7df66653bcbd5133b1ad2d221e23299659e82433c7117e67ba19be921 not found: ID does not exist" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.786887 5028 scope.go:117] "RemoveContainer" containerID="eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e" Nov 23 10:16:47 crc kubenswrapper[5028]: E1123 10:16:47.787250 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e\": container with ID starting with eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e not found: ID does not exist" containerID="eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.787290 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e"} err="failed to get container status \"eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e\": rpc error: code = NotFound desc = could not find container \"eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e\": container with ID starting with eb9c75c39cbddeeb5ba2e9f6265d41c8a7bb78c1b444c3541461fc906d412c2e not found: ID does not exist" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.787323 5028 scope.go:117] "RemoveContainer" containerID="88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09" Nov 23 10:16:47 crc kubenswrapper[5028]: E1123 10:16:47.787529 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09\": container with ID starting with 88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09 not found: ID does not exist" containerID="88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09" Nov 23 10:16:47 crc kubenswrapper[5028]: I1123 10:16:47.787557 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09"} err="failed to get container status \"88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09\": rpc error: code = NotFound desc = could not find container \"88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09\": container with ID starting with 88bc618969b5bb32b5beeae51ec69645858925f9640635a17ad7e214864e4f09 not found: ID does not exist" Nov 23 10:16:49 crc kubenswrapper[5028]: I1123 10:16:49.071247 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" path="/var/lib/kubelet/pods/dd6b49da-c391-4682-875a-4e3a5e10feb3/volumes" Nov 23 10:17:30 crc kubenswrapper[5028]: I1123 10:17:30.946312 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:17:30 crc kubenswrapper[5028]: I1123 10:17:30.947012 5028 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:18:00 crc kubenswrapper[5028]: I1123 10:18:00.946480 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:18:00 crc kubenswrapper[5028]: I1123 10:18:00.947752 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:18:30 crc kubenswrapper[5028]: I1123 10:18:30.946742 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 23 10:18:30 crc kubenswrapper[5028]: I1123 10:18:30.947635 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 23 10:18:30 crc kubenswrapper[5028]: I1123 10:18:30.947722 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p" Nov 23 10:18:30 crc kubenswrapper[5028]: I1123 10:18:30.949376 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 23 10:18:30 crc kubenswrapper[5028]: I1123 10:18:30.949497 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" gracePeriod=600 Nov 23 10:18:31 crc kubenswrapper[5028]: E1123 10:18:31.083227 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:18:31 crc kubenswrapper[5028]: I1123 10:18:31.580299 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" exitCode=0 Nov 
23 10:18:31 crc kubenswrapper[5028]: I1123 10:18:31.580462 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788"} Nov 23 10:18:31 crc kubenswrapper[5028]: I1123 10:18:31.580876 5028 scope.go:117] "RemoveContainer" containerID="fbae17c4db9708f07496e8e8f9ccd88b3899177fc161486b65a98f23acff3689" Nov 23 10:18:31 crc kubenswrapper[5028]: I1123 10:18:31.582189 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:18:31 crc kubenswrapper[5028]: E1123 10:18:31.582912 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:18:43 crc kubenswrapper[5028]: I1123 10:18:43.054230 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:18:43 crc kubenswrapper[5028]: E1123 10:18:43.056165 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.302123 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wcngw"] Nov 23 10:18:45 crc kubenswrapper[5028]: E1123 10:18:45.303084 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="registry-server" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.303098 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="registry-server" Nov 23 10:18:45 crc kubenswrapper[5028]: E1123 10:18:45.303110 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="extract-utilities" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.303117 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="extract-utilities" Nov 23 10:18:45 crc kubenswrapper[5028]: E1123 10:18:45.303141 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="extract-content" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.303147 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="extract-content" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.303377 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6b49da-c391-4682-875a-4e3a5e10feb3" containerName="registry-server" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.305021 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.319892 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcngw"] Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.382831 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-utilities\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.382899 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdd7d\" (UniqueName: \"kubernetes.io/projected/185a5de6-aa2b-4515-a1d1-74591ce58d77-kube-api-access-qdd7d\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.383648 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-catalog-content\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.486835 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-catalog-content\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.486970 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-utilities\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.487027 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdd7d\" (UniqueName: \"kubernetes.io/projected/185a5de6-aa2b-4515-a1d1-74591ce58d77-kube-api-access-qdd7d\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.487539 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-catalog-content\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.487605 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-utilities\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.507966 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qdd7d\" (UniqueName: \"kubernetes.io/projected/185a5de6-aa2b-4515-a1d1-74591ce58d77-kube-api-access-qdd7d\") pod \"community-operators-wcngw\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:45 crc kubenswrapper[5028]: I1123 10:18:45.644256 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:46 crc kubenswrapper[5028]: I1123 10:18:46.257076 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcngw"] Nov 23 10:18:46 crc kubenswrapper[5028]: W1123 10:18:46.270388 5028 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod185a5de6_aa2b_4515_a1d1_74591ce58d77.slice/crio-76a75572f498f3b4a1bdf73d18c80bc801497f5d90226b4e67fa35f9093dec58 WatchSource:0}: Error finding container 76a75572f498f3b4a1bdf73d18c80bc801497f5d90226b4e67fa35f9093dec58: Status 404 returned error can't find the container with id 76a75572f498f3b4a1bdf73d18c80bc801497f5d90226b4e67fa35f9093dec58 Nov 23 10:18:46 crc kubenswrapper[5028]: I1123 10:18:46.836494 5028 generic.go:334] "Generic (PLEG): container finished" podID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerID="a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650" exitCode=0 Nov 23 10:18:46 crc kubenswrapper[5028]: I1123 10:18:46.836757 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcngw" event={"ID":"185a5de6-aa2b-4515-a1d1-74591ce58d77","Type":"ContainerDied","Data":"a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650"} Nov 23 10:18:46 crc kubenswrapper[5028]: I1123 10:18:46.836795 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcngw" event={"ID":"185a5de6-aa2b-4515-a1d1-74591ce58d77","Type":"ContainerStarted","Data":"76a75572f498f3b4a1bdf73d18c80bc801497f5d90226b4e67fa35f9093dec58"} Nov 23 10:18:47 crc kubenswrapper[5028]: I1123 10:18:47.887575 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcngw" event={"ID":"185a5de6-aa2b-4515-a1d1-74591ce58d77","Type":"ContainerStarted","Data":"771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556"} Nov 23 10:18:48 crc kubenswrapper[5028]: I1123 10:18:48.900589 5028 generic.go:334] "Generic (PLEG): container finished" podID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerID="771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556" exitCode=0 Nov 23 10:18:48 crc kubenswrapper[5028]: I1123 10:18:48.900691 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcngw" event={"ID":"185a5de6-aa2b-4515-a1d1-74591ce58d77","Type":"ContainerDied","Data":"771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556"} Nov 23 10:18:49 crc kubenswrapper[5028]: I1123 10:18:49.918773 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcngw" event={"ID":"185a5de6-aa2b-4515-a1d1-74591ce58d77","Type":"ContainerStarted","Data":"5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c"} Nov 23 10:18:49 crc kubenswrapper[5028]: I1123 10:18:49.937703 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wcngw" 
podStartSLOduration=2.512512626 podStartE2EDuration="4.937678744s" podCreationTimestamp="2025-11-23 10:18:45 +0000 UTC" firstStartedPulling="2025-11-23 10:18:46.839601978 +0000 UTC m=+12510.537006757" lastFinishedPulling="2025-11-23 10:18:49.264768096 +0000 UTC m=+12512.962172875" observedRunningTime="2025-11-23 10:18:49.937015428 +0000 UTC m=+12513.634420207" watchObservedRunningTime="2025-11-23 10:18:49.937678744 +0000 UTC m=+12513.635083533" Nov 23 10:18:55 crc kubenswrapper[5028]: I1123 10:18:55.644843 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:55 crc kubenswrapper[5028]: I1123 10:18:55.647599 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:55 crc kubenswrapper[5028]: I1123 10:18:55.749455 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:56 crc kubenswrapper[5028]: I1123 10:18:56.053906 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:18:56 crc kubenswrapper[5028]: E1123 10:18:56.054457 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:18:56 crc kubenswrapper[5028]: I1123 10:18:56.081114 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:56 crc kubenswrapper[5028]: I1123 10:18:56.151078 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcngw"] Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.032871 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wcngw" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="registry-server" containerID="cri-o://5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c" gracePeriod=2 Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.606150 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.782358 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-catalog-content\") pod \"185a5de6-aa2b-4515-a1d1-74591ce58d77\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.782463 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-utilities\") pod \"185a5de6-aa2b-4515-a1d1-74591ce58d77\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.782743 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdd7d\" (UniqueName: \"kubernetes.io/projected/185a5de6-aa2b-4515-a1d1-74591ce58d77-kube-api-access-qdd7d\") pod \"185a5de6-aa2b-4515-a1d1-74591ce58d77\" (UID: \"185a5de6-aa2b-4515-a1d1-74591ce58d77\") " Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.784182 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-utilities" (OuterVolumeSpecName: "utilities") pod "185a5de6-aa2b-4515-a1d1-74591ce58d77" (UID: "185a5de6-aa2b-4515-a1d1-74591ce58d77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.799592 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185a5de6-aa2b-4515-a1d1-74591ce58d77-kube-api-access-qdd7d" (OuterVolumeSpecName: "kube-api-access-qdd7d") pod "185a5de6-aa2b-4515-a1d1-74591ce58d77" (UID: "185a5de6-aa2b-4515-a1d1-74591ce58d77"). InnerVolumeSpecName "kube-api-access-qdd7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.862193 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "185a5de6-aa2b-4515-a1d1-74591ce58d77" (UID: "185a5de6-aa2b-4515-a1d1-74591ce58d77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.885099 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.885130 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdd7d\" (UniqueName: \"kubernetes.io/projected/185a5de6-aa2b-4515-a1d1-74591ce58d77-kube-api-access-qdd7d\") on node \"crc\" DevicePath \"\"" Nov 23 10:18:58 crc kubenswrapper[5028]: I1123 10:18:58.885142 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185a5de6-aa2b-4515-a1d1-74591ce58d77-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.054342 5028 generic.go:334] "Generic (PLEG): container finished" podID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerID="5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c" exitCode=0 Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.054463 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcngw" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.078889 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcngw" event={"ID":"185a5de6-aa2b-4515-a1d1-74591ce58d77","Type":"ContainerDied","Data":"5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c"} Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.079028 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcngw" event={"ID":"185a5de6-aa2b-4515-a1d1-74591ce58d77","Type":"ContainerDied","Data":"76a75572f498f3b4a1bdf73d18c80bc801497f5d90226b4e67fa35f9093dec58"} Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.079080 5028 scope.go:117] "RemoveContainer" containerID="5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.116336 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcngw"] Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.131300 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wcngw"] Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.134435 5028 scope.go:117] "RemoveContainer" containerID="771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.180033 5028 scope.go:117] "RemoveContainer" containerID="a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.241431 5028 scope.go:117] "RemoveContainer" containerID="5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c" Nov 23 10:18:59 crc kubenswrapper[5028]: E1123 10:18:59.242015 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c\": container with ID starting with 5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c not found: ID does not exist" containerID="5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.242067 
5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c"} err="failed to get container status \"5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c\": rpc error: code = NotFound desc = could not find container \"5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c\": container with ID starting with 5849b6a04f0e08dcc7eef2f2ddc679d21bf4a522e9bf9a75ffb3f2d77416de2c not found: ID does not exist" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.242101 5028 scope.go:117] "RemoveContainer" containerID="771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556" Nov 23 10:18:59 crc kubenswrapper[5028]: E1123 10:18:59.242581 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556\": container with ID starting with 771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556 not found: ID does not exist" containerID="771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.242759 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556"} err="failed to get container status \"771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556\": rpc error: code = NotFound desc = could not find container \"771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556\": container with ID starting with 771b8cb09d15f2b336fcebdf5f6661123633c13bcfcea3bd2167a0a1d8828556 not found: ID does not exist" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.242809 5028 scope.go:117] "RemoveContainer" containerID="a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650" Nov 23 10:18:59 crc kubenswrapper[5028]: E1123 10:18:59.243228 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650\": container with ID starting with a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650 not found: ID does not exist" containerID="a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650" Nov 23 10:18:59 crc kubenswrapper[5028]: I1123 10:18:59.243267 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650"} err="failed to get container status \"a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650\": rpc error: code = NotFound desc = could not find container \"a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650\": container with ID starting with a6912bc4f7ad0f86d9d21a8634d8c350c1e12b21c965b8267eed528aca5e2650 not found: ID does not exist" Nov 23 10:19:01 crc kubenswrapper[5028]: I1123 10:19:01.074848 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" path="/var/lib/kubelet/pods/185a5de6-aa2b-4515-a1d1-74591ce58d77/volumes" Nov 23 10:19:05 crc kubenswrapper[5028]: I1123 10:19:05.549065 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c7db62cc-1b79-4241-9006-7c24e5e18e21/init-config-reloader/0.log" Nov 23 10:19:05 crc kubenswrapper[5028]: I1123 10:19:05.736820 5028 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c7db62cc-1b79-4241-9006-7c24e5e18e21/config-reloader/0.log" Nov 23 10:19:05 crc kubenswrapper[5028]: I1123 10:19:05.749418 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c7db62cc-1b79-4241-9006-7c24e5e18e21/alertmanager/0.log" Nov 23 10:19:05 crc kubenswrapper[5028]: I1123 10:19:05.778864 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_c7db62cc-1b79-4241-9006-7c24e5e18e21/init-config-reloader/0.log" Nov 23 10:19:05 crc kubenswrapper[5028]: I1123 10:19:05.990352 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ed909c0c-2d7e-46ab-9c04-5fa86f5884e6/aodh-evaluator/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.010444 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ed909c0c-2d7e-46ab-9c04-5fa86f5884e6/aodh-listener/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.036019 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ed909c0c-2d7e-46ab-9c04-5fa86f5884e6/aodh-api/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.098185 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_ed909c0c-2d7e-46ab-9c04-5fa86f5884e6/aodh-notifier/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.253406 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57cd499dd6-rvkzk_f4191463-b4c9-4d75-b00e-853e28f4ec88/barbican-api/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.320218 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-57cd499dd6-rvkzk_f4191463-b4c9-4d75-b00e-853e28f4ec88/barbican-api-log/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.451904 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8b8c9c8f4-mvckm_e6d4e170-a8f0-4e35-8db5-edd058b05027/barbican-keystone-listener/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.640602 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58c79c4ff5-mls6f_5c24efbd-75c6-4233-86d9-6b04095d8bad/barbican-worker/0.log" Nov 23 10:19:06 crc kubenswrapper[5028]: I1123 10:19:06.703356 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-58c79c4ff5-mls6f_5c24efbd-75c6-4233-86d9-6b04095d8bad/barbican-worker-log/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.016667 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-9vfht_0fa4a160-aa17-4390-aa9a-8f2fba7c9836/bootstrap-openstack-openstack-cell1/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.060795 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:19:07 crc kubenswrapper[5028]: E1123 10:19:07.061171 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:19:07 crc 
kubenswrapper[5028]: I1123 10:19:07.259782 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-networker-zrj2l_71b95d4c-b5f3-457d-bb73-c63c1d9f04f5/bootstrap-openstack-openstack-networker/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.264127 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-8b8c9c8f4-mvckm_e6d4e170-a8f0-4e35-8db5-edd058b05027/barbican-keystone-listener-log/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.345303 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ecb410a7-3a3a-433a-a7a7-a3120c5e433a/ceilometer-central-agent/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.456710 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ecb410a7-3a3a-433a-a7a7-a3120c5e433a/ceilometer-notification-agent/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.478403 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ecb410a7-3a3a-433a-a7a7-a3120c5e433a/proxy-httpd/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.502929 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ecb410a7-3a3a-433a-a7a7-a3120c5e433a/sg-core/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.666519 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-openstack-openstack-cell1-2pkl7_72ef3240-29e1-4d7a-adae-a9c4916b6b72/ceph-client-openstack-openstack-cell1/0.log" Nov 23 10:19:07 crc kubenswrapper[5028]: I1123 10:19:07.994982 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e790c829-88fb-40db-a145-c90769f04d24/cinder-api/0.log" Nov 23 10:19:08 crc kubenswrapper[5028]: I1123 10:19:08.167469 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e790c829-88fb-40db-a145-c90769f04d24/cinder-api-log/0.log" Nov 23 10:19:08 crc kubenswrapper[5028]: I1123 10:19:08.318613 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf/probe/0.log" Nov 23 10:19:08 crc kubenswrapper[5028]: I1123 10:19:08.468455 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1d3e2c69-0b3f-4154-a18a-c6ad665cbc58/cinder-scheduler/0.log" Nov 23 10:19:08 crc kubenswrapper[5028]: I1123 10:19:08.559414 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1d3e2c69-0b3f-4154-a18a-c6ad665cbc58/probe/0.log" Nov 23 10:19:08 crc kubenswrapper[5028]: I1123 10:19:08.811778 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_5e0d5e1f-6ca1-49c6-b33d-407da6c3cccf/cinder-backup/0.log" Nov 23 10:19:08 crc kubenswrapper[5028]: I1123 10:19:08.938219 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0d8dd319-e596-4062-8aa2-9637c332a0d7/probe/0.log" Nov 23 10:19:09 crc kubenswrapper[5028]: I1123 10:19:09.079381 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-6zfpv_052ccf3b-c34b-4dc5-a81a-0aeec151c343/configure-network-openstack-openstack-cell1/0.log" Nov 23 10:19:09 crc kubenswrapper[5028]: I1123 10:19:09.256689 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-openstack-openstack-networker-fzgtv_1ecadc08-3d9f-4d0c-b36b-57a5631f71a0/configure-network-openstack-openstack-networker/0.log" Nov 23 10:19:09 crc kubenswrapper[5028]: I1123 10:19:09.304190 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-kjpnf_7ee6d0e6-9f18-4e06-8334-685ed129a0c4/configure-os-openstack-openstack-cell1/0.log" Nov 23 10:19:09 crc kubenswrapper[5028]: I1123 10:19:09.571635 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-networker-cfwxv_bdb64a24-7e0c-451f-b487-7a17e61c0743/configure-os-openstack-openstack-networker/0.log" Nov 23 10:19:09 crc kubenswrapper[5028]: I1123 10:19:09.605207 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6654b5fc9-9p92f_fc89900f-4d23-44c4-bbda-354bd9203efd/init/0.log" Nov 23 10:19:09 crc kubenswrapper[5028]: I1123 10:19:09.855872 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6654b5fc9-9p92f_fc89900f-4d23-44c4-bbda-354bd9203efd/init/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.007742 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-cm946_c73c8666-ed55-4274-a8c4-de56ff21909e/download-cache-openstack-openstack-cell1/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.068071 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6654b5fc9-9p92f_fc89900f-4d23-44c4-bbda-354bd9203efd/dnsmasq-dns/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.262213 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-networker-pfwrm_d7743159-c6ec-414e-8bd2-523769405308/download-cache-openstack-openstack-networker/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.472825 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c1137a64-baac-4b8d-a196-2980fa226fc6/glance-log/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.499095 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c1137a64-baac-4b8d-a196-2980fa226fc6/glance-httpd/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.592556 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_0d8dd319-e596-4062-8aa2-9637c332a0d7/cinder-volume/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.691484 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_72a6f394-ec46-463f-b427-90b451766614/glance-httpd/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.724524 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_72a6f394-ec46-463f-b427-90b451766614/glance-log/0.log" Nov 23 10:19:10 crc kubenswrapper[5028]: I1123 10:19:10.898775 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5686cf9857-crmbj_c9451837-c860-4f58-875e-1394ca0bc0fc/heat-api/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.105829 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7bc7d76d4f-pp486_98ebe7a4-c6a0-4179-8db8-e164a706b3a6/heat-cfnapi/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.153226 5028 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_heat-engine-799587747-2v7jz_2c63a7d1-5df2-41bc-8896-942c44597e22/heat-engine/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.287052 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b8b66b649-ppjcx_2457f13b-dc7a-450c-b083-00edbc261f14/horizon/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.410407 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-l7q6n_abc38b1a-53f1-46e7-814d-eb2f2a1ee989/install-certs-openstack-openstack-cell1/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.432106 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b8b66b649-ppjcx_2457f13b-dc7a-450c-b083-00edbc261f14/horizon-log/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.523020 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-networker-5p94z_435a8ca8-e3b4-46e7-83f5-78080ddeeb67/install-certs-openstack-openstack-networker/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.650379 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-9p8rv_0867f0cd-0d42-4b6f-826c-4ec51f20df02/install-os-openstack-openstack-cell1/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.785210 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-networker-qgscq_819ee5e5-ede2-4053-9199-247708921b7b/install-os-openstack-openstack-networker/0.log" Nov 23 10:19:11 crc kubenswrapper[5028]: I1123 10:19:11.932126 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29398141-vpp8f_acd0873c-30c3-44dc-a1d3-0d7837dac457/keystone-cron/0.log" Nov 23 10:19:12 crc kubenswrapper[5028]: I1123 10:19:12.139161 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29398201-8hbgz_7d03980b-1bc7-40e6-890f-e7412777f302/keystone-cron/0.log" Nov 23 10:19:12 crc kubenswrapper[5028]: I1123 10:19:12.233747 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3e1ee542-5c64-4f1f-884e-959cdbee781c/kube-state-metrics/0.log" Nov 23 10:19:12 crc kubenswrapper[5028]: I1123 10:19:12.486502 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-f6p9x_da2b8220-d1d8-40ae-a96a-54e3a1c13c10/libvirt-openstack-openstack-cell1/0.log" Nov 23 10:19:12 crc kubenswrapper[5028]: I1123 10:19:12.899415 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_07392e5e-2fb2-4582-baf3-94393eed0373/manila-scheduler/0.log" Nov 23 10:19:12 crc kubenswrapper[5028]: I1123 10:19:12.995084 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_07392e5e-2fb2-4582-baf3-94393eed0373/probe/0.log" Nov 23 10:19:13 crc kubenswrapper[5028]: I1123 10:19:13.020446 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5fdfc97958-gprrf_f0dd1917-f821-48bf-bb71-f21b4116334d/keystone-api/0.log" Nov 23 10:19:13 crc kubenswrapper[5028]: I1123 10:19:13.087256 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_fd9b3987-ceb5-4869-bd2b-5892218da671/manila-api/0.log" Nov 23 10:19:13 crc kubenswrapper[5028]: I1123 10:19:13.166848 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-api-0_fd9b3987-ceb5-4869-bd2b-5892218da671/manila-api-log/0.log" Nov 23 10:19:13 crc kubenswrapper[5028]: I1123 10:19:13.320162 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_dc6b3e97-3b88-45f9-9893-160420459404/probe/0.log" Nov 23 10:19:13 crc kubenswrapper[5028]: I1123 10:19:13.321888 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_dc6b3e97-3b88-45f9-9893-160420459404/manila-share/0.log" Nov 23 10:19:13 crc kubenswrapper[5028]: I1123 10:19:13.853469 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-x6q9z_74331472-e7a6-4f7a-a7e9-32c195f1e4cf/neutron-dhcp-openstack-openstack-cell1/0.log" Nov 23 10:19:13 crc kubenswrapper[5028]: I1123 10:19:13.924152 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c89b65897-cxm9k_a25194a0-0614-4490-bb9a-c184114469f2/neutron-httpd/0.log" Nov 23 10:19:14 crc kubenswrapper[5028]: I1123 10:19:14.197611 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-sdbhj_30480d6a-dd5f-4f67-9557-a45343d87a65/neutron-metadata-openstack-openstack-cell1/0.log" Nov 23 10:19:14 crc kubenswrapper[5028]: I1123 10:19:14.374046 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-c89b65897-cxm9k_a25194a0-0614-4490-bb9a-c184114469f2/neutron-api/0.log" Nov 23 10:19:14 crc kubenswrapper[5028]: I1123 10:19:14.554327 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-networker-tv8lb_1b88c41d-a1f0-4e15-9f88-4d7cb4602f12/neutron-metadata-openstack-openstack-networker/0.log" Nov 23 10:19:14 crc kubenswrapper[5028]: I1123 10:19:14.567115 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-6rrkc_2ec4b9a2-a3fa-4495-9205-e41160705fda/neutron-sriov-openstack-openstack-cell1/0.log" Nov 23 10:19:15 crc kubenswrapper[5028]: I1123 10:19:15.011695 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_edffb306-8a5c-4842-9d43-126018e87996/nova-api-api/0.log" Nov 23 10:19:15 crc kubenswrapper[5028]: I1123 10:19:15.095919 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_eef17367-78dc-4965-b642-ce9491d8c0af/nova-cell0-conductor-conductor/0.log" Nov 23 10:19:15 crc kubenswrapper[5028]: I1123 10:19:15.361148 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_edffb306-8a5c-4842-9d43-126018e87996/nova-api-log/0.log" Nov 23 10:19:15 crc kubenswrapper[5028]: I1123 10:19:15.433121 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1e50a21f-9dd3-4f96-b7fe-1ffb83478c3d/nova-cell1-conductor-conductor/0.log" Nov 23 10:19:15 crc kubenswrapper[5028]: I1123 10:19:15.518826 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6d698ed1-a5a2-47f9-9fc0-430fe08a8909/nova-cell1-novncproxy-novncproxy/0.log" Nov 23 10:19:16 crc kubenswrapper[5028]: I1123 10:19:16.017267 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellvg6gj_05b27132-4980-4bfa-97b0-463d53cd4486/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Nov 23 10:19:16 crc kubenswrapper[5028]: I1123 10:19:16.186136 5028 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-jz4qs_067cfd9c-502a-4656-b495-b43dffc143a8/nova-cell1-openstack-openstack-cell1/0.log" Nov 23 10:19:16 crc kubenswrapper[5028]: I1123 10:19:16.454005 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_92a53659-03ef-4bd3-940f-cf8528b8012d/nova-metadata-log/0.log" Nov 23 10:19:16 crc kubenswrapper[5028]: I1123 10:19:16.523598 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_92a53659-03ef-4bd3-940f-cf8528b8012d/nova-metadata-metadata/0.log" Nov 23 10:19:16 crc kubenswrapper[5028]: I1123 10:19:16.654175 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6bfc98ff-7be8-4623-8b5c-357b97b763cf/nova-scheduler-scheduler/0.log" Nov 23 10:19:16 crc kubenswrapper[5028]: I1123 10:19:16.795141 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_81da84ab-3ca1-4553-887f-8159d930cc0f/mysql-bootstrap/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.082697 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_81da84ab-3ca1-4553-887f-8159d930cc0f/mysql-bootstrap/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.083932 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_80e72ccf-61ae-48ba-b4e8-4dbeab319ce7/mysql-bootstrap/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.153740 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_81da84ab-3ca1-4553-887f-8159d930cc0f/galera/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.409054 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0de41a86-f333-4d3f-b4aa-e7d62efeb3a3/openstackclient/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.425970 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_80e72ccf-61ae-48ba-b4e8-4dbeab319ce7/mysql-bootstrap/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.458428 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_80e72ccf-61ae-48ba-b4e8-4dbeab319ce7/galera/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.622569 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bb69d04b-e20e-4411-bc0c-27a11ea44707/openstack-network-exporter/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.715051 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_bb69d04b-e20e-4411-bc0c-27a11ea44707/ovn-northd/0.log" Nov 23 10:19:17 crc kubenswrapper[5028]: I1123 10:19:17.970399 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-cd7kr_6e17e2f1-6fdf-4c0b-9634-d6152a4f3209/ovn-openstack-openstack-cell1/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 10:19:18.229226 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-networker-v4srx_9cb79473-ee84-4aef-b25e-81acc20abf95/ovn-openstack-openstack-networker/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 10:19:18.254422 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ff3b9990-4fd4-4e2c-bff3-2717ec516b89/openstack-network-exporter/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 
10:19:18.315796 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ff3b9990-4fd4-4e2c-bff3-2717ec516b89/ovsdbserver-nb/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 10:19:18.502354 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_183bb332-8d63-4a1e-bce1-d739b4924f4a/openstack-network-exporter/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 10:19:18.506280 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_183bb332-8d63-4a1e-bce1-d739b4924f4a/ovsdbserver-nb/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 10:19:18.696645 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_791bb325-d8f6-48bc-8b4d-1fca822131f9/openstack-network-exporter/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 10:19:18.827548 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_791bb325-d8f6-48bc-8b4d-1fca822131f9/ovsdbserver-nb/0.log" Nov 23 10:19:18 crc kubenswrapper[5028]: I1123 10:19:18.858240 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_19116d9f-b4aa-4c04-9e25-35535d32165a/openstack-network-exporter/0.log" Nov 23 10:19:19 crc kubenswrapper[5028]: I1123 10:19:19.016007 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_19116d9f-b4aa-4c04-9e25-35535d32165a/ovsdbserver-sb/0.log" Nov 23 10:19:19 crc kubenswrapper[5028]: I1123 10:19:19.071503 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_b58dc09f-661c-4742-8b98-c92a2ce35664/openstack-network-exporter/0.log" Nov 23 10:19:19 crc kubenswrapper[5028]: I1123 10:19:19.165881 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_b58dc09f-661c-4742-8b98-c92a2ce35664/ovsdbserver-sb/0.log" Nov 23 10:19:19 crc kubenswrapper[5028]: I1123 10:19:19.341692 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_3d7e60b0-7a1f-49b9-aeaf-19c92d93008d/openstack-network-exporter/0.log" Nov 23 10:19:19 crc kubenswrapper[5028]: I1123 10:19:19.351070 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_3d7e60b0-7a1f-49b9-aeaf-19c92d93008d/ovsdbserver-sb/0.log" Nov 23 10:19:19 crc kubenswrapper[5028]: I1123 10:19:19.878763 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d958d448-ghwp2_f333e06c-4c38-44d0-a316-6f4882382b73/placement-api/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.031583 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-clmltb_4190418b-300b-449e-9219-bf0d0aec75c6/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.053412 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:19:20 crc kubenswrapper[5028]: E1123 10:19:20.053650 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" 
podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.237033 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d958d448-ghwp2_f333e06c-4c38-44d0-a316-6f4882382b73/placement-log/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.260137 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-nhjztz_32cb05a6-b2bc-4434-a7eb-9aae488e4dc9/pre-adoption-validation-openstack-pre-adoption-openstack-networ/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.435124 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f5dc4c10-d411-4a0e-b0de-5e9191d87531/init-config-reloader/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.675332 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f5dc4c10-d411-4a0e-b0de-5e9191d87531/config-reloader/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.705301 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f5dc4c10-d411-4a0e-b0de-5e9191d87531/init-config-reloader/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.762006 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f5dc4c10-d411-4a0e-b0de-5e9191d87531/thanos-sidecar/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.767080 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f5dc4c10-d411-4a0e-b0de-5e9191d87531/prometheus/0.log" Nov 23 10:19:20 crc kubenswrapper[5028]: I1123 10:19:20.980682 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4d864f35-ce70-4dde-adc8-94ba2a94b937/setup-container/0.log" Nov 23 10:19:21 crc kubenswrapper[5028]: I1123 10:19:21.277838 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4d864f35-ce70-4dde-adc8-94ba2a94b937/setup-container/0.log" Nov 23 10:19:21 crc kubenswrapper[5028]: I1123 10:19:21.344456 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4d864f35-ce70-4dde-adc8-94ba2a94b937/rabbitmq/0.log" Nov 23 10:19:21 crc kubenswrapper[5028]: I1123 10:19:21.357159 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e8f2a752-290c-4eaf-9311-d1f13cf93264/setup-container/0.log" Nov 23 10:19:21 crc kubenswrapper[5028]: I1123 10:19:21.514024 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e8f2a752-290c-4eaf-9311-d1f13cf93264/setup-container/0.log" Nov 23 10:19:21 crc kubenswrapper[5028]: I1123 10:19:21.607730 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e8f2a752-290c-4eaf-9311-d1f13cf93264/rabbitmq/0.log" Nov 23 10:19:21 crc kubenswrapper[5028]: I1123 10:19:21.651190 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-tzv94_cada4892-6d66-4825-8921-ff00960f0b66/reboot-os-openstack-openstack-cell1/0.log" Nov 23 10:19:21 crc kubenswrapper[5028]: I1123 10:19:21.855806 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-networker-whpjr_683efa16-72c3-46fd-a5b8-82b41754468e/reboot-os-openstack-openstack-networker/0.log" Nov 23 10:19:21 crc 
kubenswrapper[5028]: I1123 10:19:21.922079 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-lmc6x_677bbe2f-39e2-46e5-ad32-4234b984dbe3/run-os-openstack-openstack-cell1/0.log" Nov 23 10:19:22 crc kubenswrapper[5028]: I1123 10:19:22.199283 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-networker-jpws6_a8249946-f816-404d-bc42-ad98c813df1e/run-os-openstack-openstack-networker/0.log" Nov 23 10:19:22 crc kubenswrapper[5028]: I1123 10:19:22.274716 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-vhwqv_e482f4c5-016f-44df-85f2-eab9a442ba9c/ssh-known-hosts-openstack/0.log" Nov 23 10:19:22 crc kubenswrapper[5028]: I1123 10:19:22.566770 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-hxr5g_2a7e62e6-7eca-4f20-821f-fc8c61b58dda/telemetry-openstack-openstack-cell1/0.log" Nov 23 10:19:22 crc kubenswrapper[5028]: I1123 10:19:22.622519 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_621da467-543c-4ecf-80cc-fa2bb98d7a68/tempest-tests-tempest-tests-runner/0.log" Nov 23 10:19:22 crc kubenswrapper[5028]: I1123 10:19:22.874403 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_fb1317be-089f-4970-a1e6-aeeef05af72b/test-operator-logs-container/0.log" Nov 23 10:19:22 crc kubenswrapper[5028]: I1123 10:19:22.927023 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-96f58_c3203607-0919-4770-9464-326d5b95d8ad/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 23 10:19:23 crc kubenswrapper[5028]: I1123 10:19:23.251097 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-networker-hnbw4_1da50721-0bdc-4704-9a11-99c1b786a8bc/tripleo-cleanup-tripleo-cleanup-openstack-networker/0.log" Nov 23 10:19:23 crc kubenswrapper[5028]: I1123 10:19:23.276225 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-hqkw7_aea6c6e2-e241-4283-bac6-1417dd1c2e8d/validate-network-openstack-openstack-cell1/0.log" Nov 23 10:19:23 crc kubenswrapper[5028]: I1123 10:19:23.479674 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-networker-4fc87_fb11ea62-5c93-4e7a-8c64-bc843b862244/validate-network-openstack-openstack-networker/0.log" Nov 23 10:19:32 crc kubenswrapper[5028]: I1123 10:19:32.055670 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:19:32 crc kubenswrapper[5028]: E1123 10:19:32.059770 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:19:34 crc kubenswrapper[5028]: I1123 10:19:34.603900 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f454614a-030b-4c07-ac7e-633eb08e37b1/memcached/0.log" Nov 23 10:19:45 crc kubenswrapper[5028]: I1123 
10:19:45.054234 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:19:45 crc kubenswrapper[5028]: E1123 10:19:45.056751 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.166473 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw_8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065/util/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.384916 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw_8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065/pull/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.418325 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw_8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065/util/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.444652 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw_8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065/pull/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.614301 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw_8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065/util/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.627564 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw_8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065/pull/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.676227 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287t6qgw_8a79fbbe-cddf-4c2c-aaeb-1ccf2e3f0065/extract/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.890893 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-zr9nj_cfecab10-1421-49e5-9a36-f14bc9a61340/kube-rbac-proxy/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.928269 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-bwdxx_b4535624-ea6d-4e72-be76-c37915bcfe54/kube-rbac-proxy/0.log" Nov 23 10:19:50 crc kubenswrapper[5028]: I1123 10:19:50.979991 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-zr9nj_cfecab10-1421-49e5-9a36-f14bc9a61340/manager/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.185967 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-bwdxx_b4535624-ea6d-4e72-be76-c37915bcfe54/manager/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.216749 5028 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-fr59d_76d87e89-eead-45c1-89b0-053b0e595751/kube-rbac-proxy/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.252111 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-fr59d_76d87e89-eead-45c1-89b0-053b0e595751/manager/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.406807 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-85pdk_b042c881-3ca3-44d1-916a-1ed4205b66e1/kube-rbac-proxy/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.550419 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-6cwkk_8ef3e26b-808d-455b-a88e-1fc7d5f81fc3/kube-rbac-proxy/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.591224 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-85pdk_b042c881-3ca3-44d1-916a-1ed4205b66e1/manager/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.700408 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-6cwkk_8ef3e26b-808d-455b-a88e-1fc7d5f81fc3/manager/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.776731 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-hscqp_400d2d41-03cf-4d6d-966b-c1676ec373d6/kube-rbac-proxy/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.784118 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-hscqp_400d2d41-03cf-4d6d-966b-c1676ec373d6/manager/0.log" Nov 23 10:19:51 crc kubenswrapper[5028]: I1123 10:19:51.926476 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-gphz8_2390b681-a671-4a61-a36d-6ec38f13f97f/kube-rbac-proxy/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.201430 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-tl2zc_9ef25d42-bb38-4a2e-9a7b-a83dd0e30344/kube-rbac-proxy/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.204574 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-tl2zc_9ef25d42-bb38-4a2e-9a7b-a83dd0e30344/manager/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.284781 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-gphz8_2390b681-a671-4a61-a36d-6ec38f13f97f/manager/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.417074 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-r9zdz_34fd9714-3561-4cc7-9713-9f2788bf5ee4/kube-rbac-proxy/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.582113 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-r9zdz_34fd9714-3561-4cc7-9713-9f2788bf5ee4/manager/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 
10:19:52.670229 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-f2nqb_0755ad7e-aa96-4555-a4d8-dffb11e45807/manager/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.672297 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-f2nqb_0755ad7e-aa96-4555-a4d8-dffb11e45807/kube-rbac-proxy/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.825528 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-8qf7p_c2127314-0ad9-46fe-946e-a738b8bdcd12/kube-rbac-proxy/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.919073 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-8qf7p_c2127314-0ad9-46fe-946e-a738b8bdcd12/manager/0.log" Nov 23 10:19:52 crc kubenswrapper[5028]: I1123 10:19:52.952522 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-g4fnt_617b9fb7-df28-4230-a26f-41fd18a75cd7/kube-rbac-proxy/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.081910 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-g4fnt_617b9fb7-df28-4230-a26f-41fd18a75cd7/manager/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.143323 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-86q68_65571797-5661-45f7-8ec9-b87dbe97a10a/kube-rbac-proxy/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.335244 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-nbtch_b31fd080-61d4-4dea-a594-932fbbccf98b/kube-rbac-proxy/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.366873 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-86q68_65571797-5661-45f7-8ec9-b87dbe97a10a/manager/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.400451 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-nbtch_b31fd080-61d4-4dea-a594-932fbbccf98b/manager/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.560370 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd445pzzk_3f4726b9-823e-4abf-b301-6c020b882874/kube-rbac-proxy/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.608364 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd445pzzk_3f4726b9-823e-4abf-b301-6c020b882874/manager/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.809218 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-nllf2_ade56f18-e2f8-447d-8ecd-4d396affff9b/kube-rbac-proxy/0.log" Nov 23 10:19:53 crc kubenswrapper[5028]: I1123 10:19:53.917002 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-9xt7l_2f8bf0fc-0cb2-4726-96ce-c378818da6dd/kube-rbac-proxy/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.066745 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-9xt7l_2f8bf0fc-0cb2-4726-96ce-c378818da6dd/operator/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.277026 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-jtt25_00ecba7a-6f06-4513-9c6b-239606cc6462/kube-rbac-proxy/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.398972 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rc4v5_f2456c36-742d-4a62-985b-2155c0caab72/registry-server/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.462726 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-jtt25_00ecba7a-6f06-4513-9c6b-239606cc6462/manager/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.592714 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-67p4j_10bd2367-5a93-4e29-8242-737023dd21a5/kube-rbac-proxy/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.701143 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-67p4j_10bd2367-5a93-4e29-8242-737023dd21a5/manager/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.726578 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-zj6kz_ad5e138f-6210-4ef8-be27-d2b93c56b241/operator/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.892978 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-bczft_ac695e8d-ade9-44ac-8737-df03a6c712b8/kube-rbac-proxy/0.log" Nov 23 10:19:54 crc kubenswrapper[5028]: I1123 10:19:54.963739 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-bczft_ac695e8d-ade9-44ac-8737-df03a6c712b8/manager/0.log" Nov 23 10:19:55 crc kubenswrapper[5028]: I1123 10:19:55.099149 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-nfwpv_681da997-6aae-43ee-9b25-3307858c63c3/kube-rbac-proxy/0.log" Nov 23 10:19:55 crc kubenswrapper[5028]: I1123 10:19:55.253363 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-vwblx_8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa/kube-rbac-proxy/0.log" Nov 23 10:19:55 crc kubenswrapper[5028]: I1123 10:19:55.302398 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-nfwpv_681da997-6aae-43ee-9b25-3307858c63c3/manager/0.log" Nov 23 10:19:55 crc kubenswrapper[5028]: I1123 10:19:55.383730 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-vwblx_8f4a06fe-3a4a-430c-8c2f-d5e81f8243fa/manager/0.log" Nov 23 10:19:55 crc kubenswrapper[5028]: I1123 10:19:55.509865 5028 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-mjvvx_298c6ef1-dee2-4a62-a228-aa55fcbfa1b6/kube-rbac-proxy/0.log" Nov 23 10:19:55 crc kubenswrapper[5028]: I1123 10:19:55.575624 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-mjvvx_298c6ef1-dee2-4a62-a228-aa55fcbfa1b6/manager/0.log" Nov 23 10:19:56 crc kubenswrapper[5028]: I1123 10:19:56.477386 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-nllf2_ade56f18-e2f8-447d-8ecd-4d396affff9b/manager/0.log" Nov 23 10:19:57 crc kubenswrapper[5028]: I1123 10:19:57.063317 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:19:57 crc kubenswrapper[5028]: E1123 10:19:57.063676 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:20:11 crc kubenswrapper[5028]: I1123 10:20:11.054032 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:20:11 crc kubenswrapper[5028]: E1123 10:20:11.054806 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:20:14 crc kubenswrapper[5028]: I1123 10:20:14.817072 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gc5xr_ce1e72c5-9d4f-47ff-805d-921034752820/control-plane-machine-set-operator/0.log" Nov 23 10:20:14 crc kubenswrapper[5028]: I1123 10:20:14.983238 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ct4c7_f07b6179-c5bd-4735-b0a6-37f6c8d402df/machine-api-operator/0.log" Nov 23 10:20:14 crc kubenswrapper[5028]: I1123 10:20:14.985803 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ct4c7_f07b6179-c5bd-4735-b0a6-37f6c8d402df/kube-rbac-proxy/0.log" Nov 23 10:20:26 crc kubenswrapper[5028]: I1123 10:20:26.056256 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:20:26 crc kubenswrapper[5028]: E1123 10:20:26.057505 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:20:29 crc kubenswrapper[5028]: I1123 10:20:29.425485 5028 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-tld9r_f8240319-4d2d-4d44-87bd-96f5dba0a49c/cert-manager-controller/0.log" Nov 23 10:20:29 crc kubenswrapper[5028]: I1123 10:20:29.640924 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-7bwsd_d3928449-d408-40f9-961b-952b37cad330/cert-manager-cainjector/0.log" Nov 23 10:20:29 crc kubenswrapper[5028]: I1123 10:20:29.690207 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-h2b72_837ef15b-d82d-40d7-b355-96a88642875a/cert-manager-webhook/0.log" Nov 23 10:20:39 crc kubenswrapper[5028]: I1123 10:20:39.053275 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:20:39 crc kubenswrapper[5028]: E1123 10:20:39.054160 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:20:45 crc kubenswrapper[5028]: I1123 10:20:45.528502 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-mvh2c_425559e3-e955-4772-9ee7-b025d565655a/nmstate-console-plugin/0.log" Nov 23 10:20:45 crc kubenswrapper[5028]: I1123 10:20:45.681570 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4r98v_c397b21c-2367-4d08-8e3d-85e2c03afdc8/nmstate-handler/0.log" Nov 23 10:20:45 crc kubenswrapper[5028]: I1123 10:20:45.733300 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8jph2_21e96d6a-d3d8-4132-8fb0-522d64110450/kube-rbac-proxy/0.log" Nov 23 10:20:45 crc kubenswrapper[5028]: I1123 10:20:45.770468 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-8jph2_21e96d6a-d3d8-4132-8fb0-522d64110450/nmstate-metrics/0.log" Nov 23 10:20:45 crc kubenswrapper[5028]: I1123 10:20:45.883902 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-lhgb7_82ba86b5-c6a3-441d-a770-3c8ee2963240/nmstate-operator/0.log" Nov 23 10:20:45 crc kubenswrapper[5028]: I1123 10:20:45.989777 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-g2bqd_e5b8003c-1787-4f3b-9caa-d03c42d00c24/nmstate-webhook/0.log" Nov 23 10:20:51 crc kubenswrapper[5028]: I1123 10:20:51.054298 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:20:51 crc kubenswrapper[5028]: E1123 10:20:51.055046 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:21:02 crc kubenswrapper[5028]: I1123 10:21:02.507308 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-hmhtp_f994dd2d-8a9e-45c3-9bb3-91639e07482d/kube-rbac-proxy/0.log" Nov 23 10:21:02 crc kubenswrapper[5028]: I1123 10:21:02.794402 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-frr-files/0.log" Nov 23 10:21:02 crc kubenswrapper[5028]: I1123 10:21:02.966026 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-hmhtp_f994dd2d-8a9e-45c3-9bb3-91639e07482d/controller/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.040748 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-frr-files/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.093917 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-reloader/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.094164 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-metrics/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.228791 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-reloader/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.369887 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-frr-files/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.385328 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-reloader/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.447540 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-metrics/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.447675 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-metrics/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.643479 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-frr-files/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.665821 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-reloader/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.671857 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/controller/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.680062 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/cp-metrics/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.878596 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/kube-rbac-proxy/0.log" Nov 23 10:21:03 crc kubenswrapper[5028]: I1123 10:21:03.882605 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/frr-metrics/0.log" Nov 23 10:21:03 crc 
kubenswrapper[5028]: I1123 10:21:03.971961 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/kube-rbac-proxy-frr/0.log" Nov 23 10:21:04 crc kubenswrapper[5028]: I1123 10:21:04.179371 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/reloader/0.log" Nov 23 10:21:04 crc kubenswrapper[5028]: I1123 10:21:04.228639 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-6nxm7_99cd1f08-746f-4db3-bd8f-2505bee5ce57/frr-k8s-webhook-server/0.log" Nov 23 10:21:04 crc kubenswrapper[5028]: I1123 10:21:04.455497 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7c4fd7cf5d-jsllf_8730ad9d-67c7-4740-acd9-d3e5585890bb/manager/0.log" Nov 23 10:21:04 crc kubenswrapper[5028]: I1123 10:21:04.638519 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6d96c8b774-bv8l7_1ec8929e-da82-4b5e-9017-052092ec1e9a/webhook-server/0.log" Nov 23 10:21:04 crc kubenswrapper[5028]: I1123 10:21:04.724794 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hcgg8_2409cec3-0c1f-4c76-846f-2e1bf5b24258/kube-rbac-proxy/0.log" Nov 23 10:21:05 crc kubenswrapper[5028]: I1123 10:21:05.614800 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-hcgg8_2409cec3-0c1f-4c76-846f-2e1bf5b24258/speaker/0.log" Nov 23 10:21:06 crc kubenswrapper[5028]: I1123 10:21:06.053427 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:21:06 crc kubenswrapper[5028]: E1123 10:21:06.053670 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:21:07 crc kubenswrapper[5028]: I1123 10:21:07.230493 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-b8wkk_b9813473-eba8-4ab8-9778-1e96817ebcc7/frr/0.log" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.053050 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:21:20 crc kubenswrapper[5028]: E1123 10:21:20.053923 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.352310 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5_fefe2539-3b32-4043-82e8-68ee18b37878/util/0.log" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.483845 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5_fefe2539-3b32-4043-82e8-68ee18b37878/util/0.log" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.550997 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5_fefe2539-3b32-4043-82e8-68ee18b37878/pull/0.log" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.632963 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5_fefe2539-3b32-4043-82e8-68ee18b37878/pull/0.log" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.828544 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5_fefe2539-3b32-4043-82e8-68ee18b37878/util/0.log" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.864693 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5_fefe2539-3b32-4043-82e8-68ee18b37878/extract/0.log" Nov 23 10:21:20 crc kubenswrapper[5028]: I1123 10:21:20.932367 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5rpt5_fefe2539-3b32-4043-82e8-68ee18b37878/pull/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.089694 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9_7db5854f-5ab0-47a7-8e9a-cedb69a5d922/util/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.394764 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9_7db5854f-5ab0-47a7-8e9a-cedb69a5d922/pull/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.425654 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9_7db5854f-5ab0-47a7-8e9a-cedb69a5d922/pull/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.456747 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9_7db5854f-5ab0-47a7-8e9a-cedb69a5d922/util/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.695678 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9_7db5854f-5ab0-47a7-8e9a-cedb69a5d922/extract/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.705832 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9_7db5854f-5ab0-47a7-8e9a-cedb69a5d922/util/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.711846 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772egcjn9_7db5854f-5ab0-47a7-8e9a-cedb69a5d922/pull/0.log" Nov 23 10:21:21 crc kubenswrapper[5028]: I1123 10:21:21.938795 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g_fcb68667-8c05-4e65-89d0-de18923a88cc/util/0.log" Nov 23 10:21:22 crc kubenswrapper[5028]: I1123 10:21:22.197065 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g_fcb68667-8c05-4e65-89d0-de18923a88cc/util/0.log" Nov 23 10:21:22 crc kubenswrapper[5028]: I1123 10:21:22.416471 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g_fcb68667-8c05-4e65-89d0-de18923a88cc/pull/0.log" Nov 23 10:21:22 crc kubenswrapper[5028]: I1123 10:21:22.481940 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g_fcb68667-8c05-4e65-89d0-de18923a88cc/pull/0.log" Nov 23 10:21:22 crc kubenswrapper[5028]: I1123 10:21:22.688934 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g_fcb68667-8c05-4e65-89d0-de18923a88cc/util/0.log" Nov 23 10:21:22 crc kubenswrapper[5028]: I1123 10:21:22.700895 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g_fcb68667-8c05-4e65-89d0-de18923a88cc/extract/0.log" Nov 23 10:21:22 crc kubenswrapper[5028]: I1123 10:21:22.716006 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92109j26g_fcb68667-8c05-4e65-89d0-de18923a88cc/pull/0.log" Nov 23 10:21:22 crc kubenswrapper[5028]: I1123 10:21:22.941263 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs9s5_9cd6ae0b-ce93-4468-a204-e08c0781bfcb/extract-utilities/0.log" Nov 23 10:21:23 crc kubenswrapper[5028]: I1123 10:21:23.197539 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs9s5_9cd6ae0b-ce93-4468-a204-e08c0781bfcb/extract-utilities/0.log" Nov 23 10:21:23 crc kubenswrapper[5028]: I1123 10:21:23.202395 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs9s5_9cd6ae0b-ce93-4468-a204-e08c0781bfcb/extract-content/0.log" Nov 23 10:21:23 crc kubenswrapper[5028]: I1123 10:21:23.215417 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs9s5_9cd6ae0b-ce93-4468-a204-e08c0781bfcb/extract-content/0.log" Nov 23 10:21:23 crc kubenswrapper[5028]: I1123 10:21:23.432300 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs9s5_9cd6ae0b-ce93-4468-a204-e08c0781bfcb/extract-content/0.log" Nov 23 10:21:23 crc kubenswrapper[5028]: I1123 10:21:23.471309 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs9s5_9cd6ae0b-ce93-4468-a204-e08c0781bfcb/extract-utilities/0.log" Nov 23 10:21:23 crc kubenswrapper[5028]: I1123 10:21:23.708308 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7c8gg_b57330ce-b7e7-4850-ac33-d0eb0438206f/extract-utilities/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.005172 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-7c8gg_b57330ce-b7e7-4850-ac33-d0eb0438206f/extract-utilities/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.059384 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7c8gg_b57330ce-b7e7-4850-ac33-d0eb0438206f/extract-content/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.081726 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7c8gg_b57330ce-b7e7-4850-ac33-d0eb0438206f/extract-content/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.260652 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7c8gg_b57330ce-b7e7-4850-ac33-d0eb0438206f/extract-utilities/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.334563 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7c8gg_b57330ce-b7e7-4850-ac33-d0eb0438206f/extract-content/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.636125 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb_abcac30c-4771-4bbd-a67c-780b61670e1c/util/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.830777 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb_abcac30c-4771-4bbd-a67c-780b61670e1c/pull/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.882993 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb_abcac30c-4771-4bbd-a67c-780b61670e1c/util/0.log" Nov 23 10:21:24 crc kubenswrapper[5028]: I1123 10:21:24.942858 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb_abcac30c-4771-4bbd-a67c-780b61670e1c/pull/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.146170 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb_abcac30c-4771-4bbd-a67c-780b61670e1c/extract/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.159000 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb_abcac30c-4771-4bbd-a67c-780b61670e1c/pull/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.203963 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6vt8hb_abcac30c-4771-4bbd-a67c-780b61670e1c/util/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.271328 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zs9s5_9cd6ae0b-ce93-4468-a204-e08c0781bfcb/registry-server/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.395215 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sphk5_29dda505-6228-430b-8c95-89713ee51f01/marketplace-operator/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.560218 5028 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xjfv_7dc64f53-e685-41c6-bf82-7448a3dd4875/extract-utilities/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.768249 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xjfv_7dc64f53-e685-41c6-bf82-7448a3dd4875/extract-content/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.898135 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xjfv_7dc64f53-e685-41c6-bf82-7448a3dd4875/extract-utilities/0.log" Nov 23 10:21:25 crc kubenswrapper[5028]: I1123 10:21:25.909938 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xjfv_7dc64f53-e685-41c6-bf82-7448a3dd4875/extract-content/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.079267 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xjfv_7dc64f53-e685-41c6-bf82-7448a3dd4875/extract-content/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.095641 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xjfv_7dc64f53-e685-41c6-bf82-7448a3dd4875/extract-utilities/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.181753 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7c8gg_b57330ce-b7e7-4850-ac33-d0eb0438206f/registry-server/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.341420 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qdghg_7708271d-af3b-49ce-b67e-d6fffd0116d8/extract-utilities/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.519052 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qdghg_7708271d-af3b-49ce-b67e-d6fffd0116d8/extract-content/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.524289 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qdghg_7708271d-af3b-49ce-b67e-d6fffd0116d8/extract-utilities/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.601281 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qdghg_7708271d-af3b-49ce-b67e-d6fffd0116d8/extract-content/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.688936 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6xjfv_7dc64f53-e685-41c6-bf82-7448a3dd4875/registry-server/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.846044 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qdghg_7708271d-af3b-49ce-b67e-d6fffd0116d8/extract-content/0.log" Nov 23 10:21:26 crc kubenswrapper[5028]: I1123 10:21:26.846753 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qdghg_7708271d-af3b-49ce-b67e-d6fffd0116d8/extract-utilities/0.log" Nov 23 10:21:28 crc kubenswrapper[5028]: I1123 10:21:28.257604 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qdghg_7708271d-af3b-49ce-b67e-d6fffd0116d8/registry-server/0.log" Nov 23 10:21:35 crc kubenswrapper[5028]: I1123 10:21:35.053940 5028 scope.go:117] "RemoveContainer" 
containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:21:35 crc kubenswrapper[5028]: E1123 10:21:35.055969 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:21:43 crc kubenswrapper[5028]: I1123 10:21:43.563213 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-txn45_6cc451ce-ee8a-4457-9f51-b47ba8ce6b1f/prometheus-operator/0.log" Nov 23 10:21:43 crc kubenswrapper[5028]: I1123 10:21:43.995943 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5595b49fcb-956md_248c4948-5223-40e3-baec-48dfd3c4877f/prometheus-operator-admission-webhook/0.log" Nov 23 10:21:44 crc kubenswrapper[5028]: I1123 10:21:44.103106 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5595b49fcb-tjqbv_8c7c0087-1818-46f5-a3cb-44d8d6664038/prometheus-operator-admission-webhook/0.log" Nov 23 10:21:44 crc kubenswrapper[5028]: I1123 10:21:44.334131 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-6pplf_b1c5ba76-b433-4fa6-a9e5-0a5565b2b91f/operator/0.log" Nov 23 10:21:44 crc kubenswrapper[5028]: I1123 10:21:44.343382 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-nbsv6_56fa1b54-4a14-48db-81bb-77bf95b64209/perses-operator/0.log" Nov 23 10:21:48 crc kubenswrapper[5028]: I1123 10:21:48.054600 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:21:48 crc kubenswrapper[5028]: E1123 10:21:48.055570 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:22:02 crc kubenswrapper[5028]: I1123 10:22:02.053024 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:22:02 crc kubenswrapper[5028]: E1123 10:22:02.053921 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:22:16 crc kubenswrapper[5028]: I1123 10:22:16.055492 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:22:16 crc kubenswrapper[5028]: E1123 10:22:16.056818 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.296732 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8d4hm"] Nov 23 10:22:18 crc kubenswrapper[5028]: E1123 10:22:18.299248 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="extract-content" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.299264 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="extract-content" Nov 23 10:22:18 crc kubenswrapper[5028]: E1123 10:22:18.299307 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="registry-server" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.299314 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="registry-server" Nov 23 10:22:18 crc kubenswrapper[5028]: E1123 10:22:18.300122 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="extract-utilities" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.300140 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="extract-utilities" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.300441 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="185a5de6-aa2b-4515-a1d1-74591ce58d77" containerName="registry-server" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.302596 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.317372 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8d4hm"] Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.467246 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25t8x\" (UniqueName: \"kubernetes.io/projected/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-kube-api-access-25t8x\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.467603 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-catalog-content\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.467624 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-utilities\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.570488 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25t8x\" (UniqueName: \"kubernetes.io/projected/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-kube-api-access-25t8x\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.570618 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-catalog-content\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.570647 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-utilities\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.571252 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-utilities\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.571324 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-catalog-content\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.602078 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-25t8x\" (UniqueName: \"kubernetes.io/projected/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-kube-api-access-25t8x\") pod \"redhat-operators-8d4hm\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:18 crc kubenswrapper[5028]: I1123 10:22:18.633660 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:19 crc kubenswrapper[5028]: I1123 10:22:19.189804 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8d4hm"] Nov 23 10:22:20 crc kubenswrapper[5028]: I1123 10:22:20.021729 5028 generic.go:334] "Generic (PLEG): container finished" podID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerID="40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c" exitCode=0 Nov 23 10:22:20 crc kubenswrapper[5028]: I1123 10:22:20.021845 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4hm" event={"ID":"47c64e9f-6cd3-4401-9b91-d10ef692d6f0","Type":"ContainerDied","Data":"40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c"} Nov 23 10:22:20 crc kubenswrapper[5028]: I1123 10:22:20.022294 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4hm" event={"ID":"47c64e9f-6cd3-4401-9b91-d10ef692d6f0","Type":"ContainerStarted","Data":"7a3e8479f518e093892c9159e0b282b8377849f01be201ad1d746c3b46ced572"} Nov 23 10:22:20 crc kubenswrapper[5028]: I1123 10:22:20.025196 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 10:22:21 crc kubenswrapper[5028]: I1123 10:22:21.105129 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4hm" event={"ID":"47c64e9f-6cd3-4401-9b91-d10ef692d6f0","Type":"ContainerStarted","Data":"bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa"} Nov 23 10:22:23 crc kubenswrapper[5028]: I1123 10:22:23.097781 5028 generic.go:334] "Generic (PLEG): container finished" podID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerID="bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa" exitCode=0 Nov 23 10:22:23 crc kubenswrapper[5028]: I1123 10:22:23.098121 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4hm" event={"ID":"47c64e9f-6cd3-4401-9b91-d10ef692d6f0","Type":"ContainerDied","Data":"bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa"} Nov 23 10:22:24 crc kubenswrapper[5028]: I1123 10:22:24.111970 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4hm" event={"ID":"47c64e9f-6cd3-4401-9b91-d10ef692d6f0","Type":"ContainerStarted","Data":"8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035"} Nov 23 10:22:24 crc kubenswrapper[5028]: I1123 10:22:24.142264 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8d4hm" podStartSLOduration=2.435428445 podStartE2EDuration="6.142240675s" podCreationTimestamp="2025-11-23 10:22:18 +0000 UTC" firstStartedPulling="2025-11-23 10:22:20.024778133 +0000 UTC m=+12723.722182912" lastFinishedPulling="2025-11-23 10:22:23.731590353 +0000 UTC m=+12727.428995142" observedRunningTime="2025-11-23 10:22:24.138591655 +0000 UTC m=+12727.835996434" watchObservedRunningTime="2025-11-23 10:22:24.142240675 +0000 UTC m=+12727.839645464" Nov 23 10:22:28 crc 
Nov 23 10:22:28 crc kubenswrapper[5028]: I1123 10:22:28.053315 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788"
Nov 23 10:22:28 crc kubenswrapper[5028]: E1123 10:22:28.054421 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 10:22:28 crc kubenswrapper[5028]: I1123 10:22:28.634912 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8d4hm"
Nov 23 10:22:28 crc kubenswrapper[5028]: I1123 10:22:28.634998 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8d4hm"
Nov 23 10:22:29 crc kubenswrapper[5028]: I1123 10:22:29.700216 5028 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8d4hm" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="registry-server" probeResult="failure" output=<
Nov 23 10:22:29 crc kubenswrapper[5028]: timeout: failed to connect service ":50051" within 1s
Nov 23 10:22:29 crc kubenswrapper[5028]: >
Nov 23 10:22:38 crc kubenswrapper[5028]: I1123 10:22:38.723493 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8d4hm"
Nov 23 10:22:38 crc kubenswrapper[5028]: I1123 10:22:38.806725 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8d4hm"
Nov 23 10:22:38 crc kubenswrapper[5028]: I1123 10:22:38.979176 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8d4hm"]
Nov 23 10:22:39 crc kubenswrapper[5028]: I1123 10:22:39.054374 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788"
Nov 23 10:22:39 crc kubenswrapper[5028]: E1123 10:22:39.055056 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273"
Nov 23 10:22:40 crc kubenswrapper[5028]: I1123 10:22:40.351846 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8d4hm" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="registry-server" containerID="cri-o://8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035" gracePeriod=2
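
The startup-probe failure above includes the probe's captured output: the registry-server container serves its catalog over gRPC on :50051, and "timeout: failed to connect service \":50051\" within 1s" is what the health check prints while that endpoint is still warming up. A rough Go equivalent of one probe attempt against the standard grpc.health.v1 service (address and the 1s timeout taken from the log; this is an illustration, not the probe binary's source):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // One probe attempt with the 1s budget seen in the log output.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            // While the server warms up this fails, like the log's timeout.
            fmt.Println("probe failure:", err)
            return
        }
        fmt.Println("probe result:", resp.GetStatus()) // SERVING once ready
    }

Ten seconds later the same probe reports status="started", which matches a catalog that simply needed time to load.
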
Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.157912 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25t8x\" (UniqueName: \"kubernetes.io/projected/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-kube-api-access-25t8x\") pod \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.157985 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-utilities\") pod \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.158049 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-catalog-content\") pod \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\" (UID: \"47c64e9f-6cd3-4401-9b91-d10ef692d6f0\") " Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.159546 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-utilities" (OuterVolumeSpecName: "utilities") pod "47c64e9f-6cd3-4401-9b91-d10ef692d6f0" (UID: "47c64e9f-6cd3-4401-9b91-d10ef692d6f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.190369 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-kube-api-access-25t8x" (OuterVolumeSpecName: "kube-api-access-25t8x") pod "47c64e9f-6cd3-4401-9b91-d10ef692d6f0" (UID: "47c64e9f-6cd3-4401-9b91-d10ef692d6f0"). InnerVolumeSpecName "kube-api-access-25t8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.240453 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47c64e9f-6cd3-4401-9b91-d10ef692d6f0" (UID: "47c64e9f-6cd3-4401-9b91-d10ef692d6f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.261419 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25t8x\" (UniqueName: \"kubernetes.io/projected/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-kube-api-access-25t8x\") on node \"crc\" DevicePath \"\"" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.261448 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.261459 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c64e9f-6cd3-4401-9b91-d10ef692d6f0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.367014 5028 generic.go:334] "Generic (PLEG): container finished" podID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerID="8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035" exitCode=0 Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.367078 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4hm" event={"ID":"47c64e9f-6cd3-4401-9b91-d10ef692d6f0","Type":"ContainerDied","Data":"8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035"} Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.367114 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8d4hm" event={"ID":"47c64e9f-6cd3-4401-9b91-d10ef692d6f0","Type":"ContainerDied","Data":"7a3e8479f518e093892c9159e0b282b8377849f01be201ad1d746c3b46ced572"} Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.367111 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8d4hm" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.367137 5028 scope.go:117] "RemoveContainer" containerID="8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.407221 5028 scope.go:117] "RemoveContainer" containerID="bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.429167 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8d4hm"] Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.440410 5028 scope.go:117] "RemoveContainer" containerID="40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.444888 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8d4hm"] Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.492836 5028 scope.go:117] "RemoveContainer" containerID="8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035" Nov 23 10:22:41 crc kubenswrapper[5028]: E1123 10:22:41.493571 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035\": container with ID starting with 8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035 not found: ID does not exist" containerID="8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.493608 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035"} err="failed to get container status \"8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035\": rpc error: code = NotFound desc = could not find container \"8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035\": container with ID starting with 8f2062fe41fd2b3524ce3573837f52efbb339889eaaea2d04ff74e1898bad035 not found: ID does not exist" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.493632 5028 scope.go:117] "RemoveContainer" containerID="bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa" Nov 23 10:22:41 crc kubenswrapper[5028]: E1123 10:22:41.494197 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa\": container with ID starting with bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa not found: ID does not exist" containerID="bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.494247 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa"} err="failed to get container status \"bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa\": rpc error: code = NotFound desc = could not find container \"bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa\": container with ID starting with bab87d1a21a92be667223cf9c5f06028d66c664a0c6dbbd71fe0e73e755e2cfa not found: ID does not exist" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.494282 5028 scope.go:117] "RemoveContainer" 
containerID="40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c" Nov 23 10:22:41 crc kubenswrapper[5028]: E1123 10:22:41.494627 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c\": container with ID starting with 40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c not found: ID does not exist" containerID="40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c" Nov 23 10:22:41 crc kubenswrapper[5028]: I1123 10:22:41.494654 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c"} err="failed to get container status \"40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c\": rpc error: code = NotFound desc = could not find container \"40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c\": container with ID starting with 40896fc8d097a98c944ca5ca9f7d6876ae5a76f269b45e8e21ae3640d4a16a7c not found: ID does not exist" Nov 23 10:22:43 crc kubenswrapper[5028]: I1123 10:22:43.068497 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" path="/var/lib/kubelet/pods/47c64e9f-6cd3-4401-9b91-d10ef692d6f0/volumes" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.361708 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wzk9c"] Nov 23 10:22:50 crc kubenswrapper[5028]: E1123 10:22:50.363819 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="extract-utilities" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.363917 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="extract-utilities" Nov 23 10:22:50 crc kubenswrapper[5028]: E1123 10:22:50.364050 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="extract-content" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.364123 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="extract-content" Nov 23 10:22:50 crc kubenswrapper[5028]: E1123 10:22:50.364204 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="registry-server" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.364272 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="registry-server" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.364638 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c64e9f-6cd3-4401-9b91-d10ef692d6f0" containerName="registry-server" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.366766 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.386820 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wzk9c"] Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.431790 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-catalog-content\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.431994 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-utilities\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.432066 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98f2\" (UniqueName: \"kubernetes.io/projected/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-kube-api-access-x98f2\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.534317 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98f2\" (UniqueName: \"kubernetes.io/projected/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-kube-api-access-x98f2\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.534495 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-catalog-content\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.534577 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-utilities\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.535184 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-utilities\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.535777 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-catalog-content\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.560744 5028 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x98f2\" (UniqueName: \"kubernetes.io/projected/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-kube-api-access-x98f2\") pod \"certified-operators-wzk9c\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") " pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:50 crc kubenswrapper[5028]: I1123 10:22:50.691814 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:22:51 crc kubenswrapper[5028]: I1123 10:22:51.257221 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wzk9c"] Nov 23 10:22:51 crc kubenswrapper[5028]: I1123 10:22:51.499906 5028 generic.go:334] "Generic (PLEG): container finished" podID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerID="0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1" exitCode=0 Nov 23 10:22:51 crc kubenswrapper[5028]: I1123 10:22:51.499970 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wzk9c" event={"ID":"a7f0d100-fdaf-4f35-983b-7ffc397bcc21","Type":"ContainerDied","Data":"0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1"} Nov 23 10:22:51 crc kubenswrapper[5028]: I1123 10:22:51.500002 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wzk9c" event={"ID":"a7f0d100-fdaf-4f35-983b-7ffc397bcc21","Type":"ContainerStarted","Data":"01588efae702c85fa9ffc2f7e587978a941e857585a7c521db2739f3263ada00"} Nov 23 10:22:52 crc kubenswrapper[5028]: I1123 10:22:52.521729 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wzk9c" event={"ID":"a7f0d100-fdaf-4f35-983b-7ffc397bcc21","Type":"ContainerStarted","Data":"479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892"} Nov 23 10:22:53 crc kubenswrapper[5028]: I1123 10:22:53.057114 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:22:53 crc kubenswrapper[5028]: E1123 10:22:53.058456 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:22:54 crc kubenswrapper[5028]: I1123 10:22:54.558345 5028 generic.go:334] "Generic (PLEG): container finished" podID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerID="479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892" exitCode=0 Nov 23 10:22:54 crc kubenswrapper[5028]: I1123 10:22:54.558442 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wzk9c" event={"ID":"a7f0d100-fdaf-4f35-983b-7ffc397bcc21","Type":"ContainerDied","Data":"479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892"} Nov 23 10:22:55 crc kubenswrapper[5028]: I1123 10:22:55.573617 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wzk9c" event={"ID":"a7f0d100-fdaf-4f35-983b-7ffc397bcc21","Type":"ContainerStarted","Data":"c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c"} Nov 23 10:22:55 crc kubenswrapper[5028]: I1123 10:22:55.617171 5028 
Nov 23 10:22:55 crc kubenswrapper[5028]: I1123 10:22:55.617171 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wzk9c" podStartSLOduration=2.122040143 podStartE2EDuration="5.617145164s" podCreationTimestamp="2025-11-23 10:22:50 +0000 UTC" firstStartedPulling="2025-11-23 10:22:51.501961618 +0000 UTC m=+12755.199366397" lastFinishedPulling="2025-11-23 10:22:54.997066599 +0000 UTC m=+12758.694471418" observedRunningTime="2025-11-23 10:22:55.610480809 +0000 UTC m=+12759.307885628" watchObservedRunningTime="2025-11-23 10:22:55.617145164 +0000 UTC m=+12759.314549963"
Nov 23 10:23:00 crc kubenswrapper[5028]: I1123 10:23:00.693048 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wzk9c"
Nov 23 10:23:00 crc kubenswrapper[5028]: I1123 10:23:00.693773 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wzk9c"
Nov 23 10:23:00 crc kubenswrapper[5028]: I1123 10:23:00.780408 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wzk9c"
Nov 23 10:23:01 crc kubenswrapper[5028]: I1123 10:23:01.753229 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wzk9c"
Nov 23 10:23:01 crc kubenswrapper[5028]: I1123 10:23:01.850226 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wzk9c"]
Nov 23 10:23:03 crc kubenswrapper[5028]: I1123 10:23:03.682173 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wzk9c" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="registry-server" containerID="cri-o://c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c" gracePeriod=2
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.231117 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wzk9c"
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.321792 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-utilities\") pod \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") "
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.321894 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x98f2\" (UniqueName: \"kubernetes.io/projected/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-kube-api-access-x98f2\") pod \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") "
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.321962 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-catalog-content\") pod \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\" (UID: \"a7f0d100-fdaf-4f35-983b-7ffc397bcc21\") "
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.322648 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-utilities" (OuterVolumeSpecName: "utilities") pod "a7f0d100-fdaf-4f35-983b-7ffc397bcc21" (UID: "a7f0d100-fdaf-4f35-983b-7ffc397bcc21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.343378 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-kube-api-access-x98f2" (OuterVolumeSpecName: "kube-api-access-x98f2") pod "a7f0d100-fdaf-4f35-983b-7ffc397bcc21" (UID: "a7f0d100-fdaf-4f35-983b-7ffc397bcc21"). InnerVolumeSpecName "kube-api-access-x98f2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.389213 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7f0d100-fdaf-4f35-983b-7ffc397bcc21" (UID: "a7f0d100-fdaf-4f35-983b-7ffc397bcc21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.425141 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-utilities\") on node \"crc\" DevicePath \"\""
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.425198 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x98f2\" (UniqueName: \"kubernetes.io/projected/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-kube-api-access-x98f2\") on node \"crc\" DevicePath \"\""
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.425223 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7f0d100-fdaf-4f35-983b-7ffc397bcc21-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.698543 5028 generic.go:334] "Generic (PLEG): container finished" podID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerID="c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c" exitCode=0
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.698592 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wzk9c" event={"ID":"a7f0d100-fdaf-4f35-983b-7ffc397bcc21","Type":"ContainerDied","Data":"c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c"}
Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.698625 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wzk9c" event={"ID":"a7f0d100-fdaf-4f35-983b-7ffc397bcc21","Type":"ContainerDied","Data":"01588efae702c85fa9ffc2f7e587978a941e857585a7c521db2739f3263ada00"}
Need to start a new one" pod="openshift-marketplace/certified-operators-wzk9c" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.698676 5028 scope.go:117] "RemoveContainer" containerID="c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.746531 5028 scope.go:117] "RemoveContainer" containerID="479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.756506 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wzk9c"] Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.768804 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wzk9c"] Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.786223 5028 scope.go:117] "RemoveContainer" containerID="0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.828549 5028 scope.go:117] "RemoveContainer" containerID="c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c" Nov 23 10:23:04 crc kubenswrapper[5028]: E1123 10:23:04.829536 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c\": container with ID starting with c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c not found: ID does not exist" containerID="c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.829585 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c"} err="failed to get container status \"c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c\": rpc error: code = NotFound desc = could not find container \"c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c\": container with ID starting with c44be66b84b3ea3597827c7b77b3ce5972ba129ace713a615ef4799db14bb07c not found: ID does not exist" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.829613 5028 scope.go:117] "RemoveContainer" containerID="479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892" Nov 23 10:23:04 crc kubenswrapper[5028]: E1123 10:23:04.829862 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892\": container with ID starting with 479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892 not found: ID does not exist" containerID="479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.829893 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892"} err="failed to get container status \"479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892\": rpc error: code = NotFound desc = could not find container \"479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892\": container with ID starting with 479ba44cb13c78c0ed0de34867fd3bbb4e14a07a8159142ea14690018cd6e892 not found: ID does not exist" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.829912 5028 scope.go:117] "RemoveContainer" 
containerID="0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1" Nov 23 10:23:04 crc kubenswrapper[5028]: E1123 10:23:04.830335 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1\": container with ID starting with 0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1 not found: ID does not exist" containerID="0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1" Nov 23 10:23:04 crc kubenswrapper[5028]: I1123 10:23:04.830364 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1"} err="failed to get container status \"0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1\": rpc error: code = NotFound desc = could not find container \"0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1\": container with ID starting with 0d8f6f135cf7a016c638966a5ce0365264241c91b843207649d14c6798c6e2a1 not found: ID does not exist" Nov 23 10:23:05 crc kubenswrapper[5028]: I1123 10:23:05.073943 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" path="/var/lib/kubelet/pods/a7f0d100-fdaf-4f35-983b-7ffc397bcc21/volumes" Nov 23 10:23:08 crc kubenswrapper[5028]: I1123 10:23:08.053682 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:23:08 crc kubenswrapper[5028]: E1123 10:23:08.054671 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:23:19 crc kubenswrapper[5028]: I1123 10:23:19.054860 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:23:19 crc kubenswrapper[5028]: E1123 10:23:19.056167 5028 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-th92p_openshift-machine-config-operator(aa1c051a-31cd-4dd3-9be8-6194822c2273)\"" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" Nov 23 10:23:31 crc kubenswrapper[5028]: I1123 10:23:31.056901 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788" Nov 23 10:23:32 crc kubenswrapper[5028]: I1123 10:23:32.143711 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"9a06659fad676c39eb7838140efcb7fafd7d705cf6b693f3a8e4d3d07c3bc2db"} Nov 23 10:25:45 crc kubenswrapper[5028]: I1123 10:25:45.653169 5028 generic.go:334] "Generic (PLEG): container finished" podID="194cef00-7b28-406f-a920-f47a965d5f6e" containerID="f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94" exitCode=0 Nov 23 10:25:45 crc kubenswrapper[5028]: I1123 
Nov 23 10:25:45 crc kubenswrapper[5028]: I1123 10:25:45.654158 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hmzt4/must-gather-d29vb" event={"ID":"194cef00-7b28-406f-a920-f47a965d5f6e","Type":"ContainerDied","Data":"f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94"}
Nov 23 10:25:45 crc kubenswrapper[5028]: I1123 10:25:45.655144 5028 scope.go:117] "RemoveContainer" containerID="f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94"
Nov 23 10:25:46 crc kubenswrapper[5028]: I1123 10:25:46.398980 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hmzt4_must-gather-d29vb_194cef00-7b28-406f-a920-f47a965d5f6e/gather/0.log"
Nov 23 10:25:57 crc kubenswrapper[5028]: I1123 10:25:57.908346 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hmzt4/must-gather-d29vb"]
Nov 23 10:25:57 crc kubenswrapper[5028]: I1123 10:25:57.909470 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hmzt4/must-gather-d29vb" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" containerName="copy" containerID="cri-o://1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a" gracePeriod=2
Nov 23 10:25:57 crc kubenswrapper[5028]: I1123 10:25:57.936396 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hmzt4/must-gather-d29vb"]
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.445603 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hmzt4_must-gather-d29vb_194cef00-7b28-406f-a920-f47a965d5f6e/copy/0.log"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.446494 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/must-gather-d29vb"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.526438 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/194cef00-7b28-406f-a920-f47a965d5f6e-must-gather-output\") pod \"194cef00-7b28-406f-a920-f47a965d5f6e\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") "
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.527303 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5lqm\" (UniqueName: \"kubernetes.io/projected/194cef00-7b28-406f-a920-f47a965d5f6e-kube-api-access-b5lqm\") pod \"194cef00-7b28-406f-a920-f47a965d5f6e\" (UID: \"194cef00-7b28-406f-a920-f47a965d5f6e\") "
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.551381 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194cef00-7b28-406f-a920-f47a965d5f6e-kube-api-access-b5lqm" (OuterVolumeSpecName: "kube-api-access-b5lqm") pod "194cef00-7b28-406f-a920-f47a965d5f6e" (UID: "194cef00-7b28-406f-a920-f47a965d5f6e"). InnerVolumeSpecName "kube-api-access-b5lqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.634375 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5lqm\" (UniqueName: \"kubernetes.io/projected/194cef00-7b28-406f-a920-f47a965d5f6e-kube-api-access-b5lqm\") on node \"crc\" DevicePath \"\""
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.859027 5028 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hmzt4_must-gather-d29vb_194cef00-7b28-406f-a920-f47a965d5f6e/copy/0.log"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.859593 5028 generic.go:334] "Generic (PLEG): container finished" podID="194cef00-7b28-406f-a920-f47a965d5f6e" containerID="1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a" exitCode=143
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.859679 5028 scope.go:117] "RemoveContainer" containerID="1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.859997 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hmzt4/must-gather-d29vb"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.889989 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194cef00-7b28-406f-a920-f47a965d5f6e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "194cef00-7b28-406f-a920-f47a965d5f6e" (UID: "194cef00-7b28-406f-a920-f47a965d5f6e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.901837 5028 scope.go:117] "RemoveContainer" containerID="f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.943500 5028 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/194cef00-7b28-406f-a920-f47a965d5f6e-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.968826 5028 scope.go:117] "RemoveContainer" containerID="1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a"
Nov 23 10:25:58 crc kubenswrapper[5028]: E1123 10:25:58.969378 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a\": container with ID starting with 1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a not found: ID does not exist" containerID="1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.969430 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a"} err="failed to get container status \"1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a\": rpc error: code = NotFound desc = could not find container \"1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a\": container with ID starting with 1aeb94650823fdb437828092efd90d28c17243b5e690887c6694a08cde43488a not found: ID does not exist"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.969457 5028 scope.go:117] "RemoveContainer" containerID="f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94"
Nov 23 10:25:58 crc kubenswrapper[5028]: E1123 10:25:58.970269 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94\": container with ID starting with f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94 not found: ID does not exist" containerID="f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94"
Nov 23 10:25:58 crc kubenswrapper[5028]: I1123 10:25:58.970341 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94"} err="failed to get container status \"f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94\": rpc error: code = NotFound desc = could not find container \"f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94\": container with ID starting with f76ec0951e7c26d4ba40cc73b894ddd499a4210aa9b144045e313501c089bc94 not found: ID does not exist"
Nov 23 10:25:59 crc kubenswrapper[5028]: I1123 10:25:59.070346 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" path="/var/lib/kubelet/pods/194cef00-7b28-406f-a920-f47a965d5f6e/volumes"
Nov 23 10:26:00 crc kubenswrapper[5028]: I1123 10:26:00.946694 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 10:26:00 crc kubenswrapper[5028]: I1123 10:26:00.947060 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 10:26:30 crc kubenswrapper[5028]: I1123 10:26:30.946197 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 10:26:30 crc kubenswrapper[5028]: I1123 10:26:30.947023 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 23 10:27:00 crc kubenswrapper[5028]: I1123 10:27:00.947295 5028 patch_prober.go:28] interesting pod/machine-config-daemon-th92p container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 23 10:27:00 crc kubenswrapper[5028]: I1123 10:27:00.948026 5028 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
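
The three failure pairs above are ordinary HTTP liveness probes: the kubelet GETs http://127.0.0.1:8798/health and is refused because nothing is listening while the daemon container is down. An approximate single probe attempt in Go (URL from the log; the 1s timeout is an assumption matching the kubelet's default timeoutSeconds):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second} // assumed probe timeout
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // With no listener this prints the same "connect: connection
            // refused" seen in the prober output above.
            fmt.Println("Liveness probe failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("Liveness probe status:", resp.Status) // 2xx/3xx passes
    }

After three consecutive failures (the default failureThreshold) the kubelet marks the probe unhealthy, which is exactly what happens next at 10:27:00.
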
Nov 23 10:27:00 crc kubenswrapper[5028]: I1123 10:27:00.948110 5028 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-th92p"
Nov 23 10:27:00 crc kubenswrapper[5028]: I1123 10:27:00.949697 5028 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a06659fad676c39eb7838140efcb7fafd7d705cf6b693f3a8e4d3d07c3bc2db"} pod="openshift-machine-config-operator/machine-config-daemon-th92p" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 23 10:27:00 crc kubenswrapper[5028]: I1123 10:27:00.949837 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-th92p" podUID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerName="machine-config-daemon" containerID="cri-o://9a06659fad676c39eb7838140efcb7fafd7d705cf6b693f3a8e4d3d07c3bc2db" gracePeriod=600
Nov 23 10:27:01 crc kubenswrapper[5028]: I1123 10:27:01.777007 5028 generic.go:334] "Generic (PLEG): container finished" podID="aa1c051a-31cd-4dd3-9be8-6194822c2273" containerID="9a06659fad676c39eb7838140efcb7fafd7d705cf6b693f3a8e4d3d07c3bc2db" exitCode=0
Nov 23 10:27:01 crc kubenswrapper[5028]: I1123 10:27:01.777066 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerDied","Data":"9a06659fad676c39eb7838140efcb7fafd7d705cf6b693f3a8e4d3d07c3bc2db"}
Nov 23 10:27:01 crc kubenswrapper[5028]: I1123 10:27:01.777688 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-th92p" event={"ID":"aa1c051a-31cd-4dd3-9be8-6194822c2273","Type":"ContainerStarted","Data":"18f7a618875f8c3bdd966f02f7bf26bf16059bac7095fa31ef05e9466ef0f81b"}
Nov 23 10:27:01 crc kubenswrapper[5028]: I1123 10:27:01.777725 5028 scope.go:117] "RemoveContainer" containerID="96aff0f33be43d7d628697e9d03059f544eb53e7d5c7b6d33c1274ad07d51788"
Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.300139 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t9rjl"]
Nov 23 10:27:33 crc kubenswrapper[5028]: E1123 10:27:33.301921 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="extract-utilities"
Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.301982 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="extract-utilities"
Nov 23 10:27:33 crc kubenswrapper[5028]: E1123 10:27:33.302034 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" containerName="gather"
Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.302047 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" containerName="gather"
Nov 23 10:27:33 crc kubenswrapper[5028]: E1123 10:27:33.302087 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" containerName="copy"
Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.302108 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" containerName="copy"
removing container" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="extract-content" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.302208 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="extract-content" Nov 23 10:27:33 crc kubenswrapper[5028]: E1123 10:27:33.302235 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="registry-server" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.302250 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="registry-server" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.302621 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f0d100-fdaf-4f35-983b-7ffc397bcc21" containerName="registry-server" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.302666 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" containerName="gather" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.302692 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="194cef00-7b28-406f-a920-f47a965d5f6e" containerName="copy" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.305872 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.316113 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9rjl"] Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.400366 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9w5k\" (UniqueName: \"kubernetes.io/projected/73dbab21-b412-4ef5-af91-1a4c7498ae72-kube-api-access-p9w5k\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.400446 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-utilities\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.400746 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-catalog-content\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.503846 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9w5k\" (UniqueName: \"kubernetes.io/projected/73dbab21-b412-4ef5-af91-1a4c7498ae72-kube-api-access-p9w5k\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.504089 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-utilities\") pod 
\"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.504355 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-catalog-content\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.504978 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-utilities\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.505276 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-catalog-content\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.527062 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9w5k\" (UniqueName: \"kubernetes.io/projected/73dbab21-b412-4ef5-af91-1a4c7498ae72-kube-api-access-p9w5k\") pod \"redhat-marketplace-t9rjl\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:33 crc kubenswrapper[5028]: I1123 10:27:33.641848 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:34 crc kubenswrapper[5028]: I1123 10:27:34.183721 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9rjl"] Nov 23 10:27:34 crc kubenswrapper[5028]: I1123 10:27:34.279637 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9rjl" event={"ID":"73dbab21-b412-4ef5-af91-1a4c7498ae72","Type":"ContainerStarted","Data":"4068967d3fd19c8755c58505114ec33ee4404e35259fa47728a84710ed667fe4"} Nov 23 10:27:35 crc kubenswrapper[5028]: I1123 10:27:35.299804 5028 generic.go:334] "Generic (PLEG): container finished" podID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerID="9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9" exitCode=0 Nov 23 10:27:35 crc kubenswrapper[5028]: I1123 10:27:35.300167 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9rjl" event={"ID":"73dbab21-b412-4ef5-af91-1a4c7498ae72","Type":"ContainerDied","Data":"9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9"} Nov 23 10:27:35 crc kubenswrapper[5028]: I1123 10:27:35.315664 5028 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 23 10:27:37 crc kubenswrapper[5028]: I1123 10:27:37.330766 5028 generic.go:334] "Generic (PLEG): container finished" podID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerID="970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7" exitCode=0 Nov 23 10:27:37 crc kubenswrapper[5028]: I1123 10:27:37.330909 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9rjl" event={"ID":"73dbab21-b412-4ef5-af91-1a4c7498ae72","Type":"ContainerDied","Data":"970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7"} Nov 23 10:27:38 crc kubenswrapper[5028]: I1123 10:27:38.346707 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9rjl" event={"ID":"73dbab21-b412-4ef5-af91-1a4c7498ae72","Type":"ContainerStarted","Data":"d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699"} Nov 23 10:27:38 crc kubenswrapper[5028]: I1123 10:27:38.380207 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t9rjl" podStartSLOduration=2.974638097 podStartE2EDuration="5.380183162s" podCreationTimestamp="2025-11-23 10:27:33 +0000 UTC" firstStartedPulling="2025-11-23 10:27:35.314851537 +0000 UTC m=+13039.012256356" lastFinishedPulling="2025-11-23 10:27:37.720396622 +0000 UTC m=+13041.417801421" observedRunningTime="2025-11-23 10:27:38.36560814 +0000 UTC m=+13042.063012939" watchObservedRunningTime="2025-11-23 10:27:38.380183162 +0000 UTC m=+13042.077587941" Nov 23 10:27:43 crc kubenswrapper[5028]: I1123 10:27:43.642604 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:43 crc kubenswrapper[5028]: I1123 10:27:43.643127 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:43 crc kubenswrapper[5028]: I1123 10:27:43.742030 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:44 crc kubenswrapper[5028]: I1123 10:27:44.523852 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:44 crc kubenswrapper[5028]: I1123 10:27:44.610736 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9rjl"] Nov 23 10:27:46 crc kubenswrapper[5028]: I1123 10:27:46.466920 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t9rjl" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="registry-server" containerID="cri-o://d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699" gracePeriod=2 Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.019297 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.076282 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9w5k\" (UniqueName: \"kubernetes.io/projected/73dbab21-b412-4ef5-af91-1a4c7498ae72-kube-api-access-p9w5k\") pod \"73dbab21-b412-4ef5-af91-1a4c7498ae72\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.076536 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-catalog-content\") pod \"73dbab21-b412-4ef5-af91-1a4c7498ae72\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.076571 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-utilities\") pod \"73dbab21-b412-4ef5-af91-1a4c7498ae72\" (UID: \"73dbab21-b412-4ef5-af91-1a4c7498ae72\") " Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.095977 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-utilities" (OuterVolumeSpecName: "utilities") pod "73dbab21-b412-4ef5-af91-1a4c7498ae72" (UID: "73dbab21-b412-4ef5-af91-1a4c7498ae72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.097394 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73dbab21-b412-4ef5-af91-1a4c7498ae72-kube-api-access-p9w5k" (OuterVolumeSpecName: "kube-api-access-p9w5k") pod "73dbab21-b412-4ef5-af91-1a4c7498ae72" (UID: "73dbab21-b412-4ef5-af91-1a4c7498ae72"). InnerVolumeSpecName "kube-api-access-p9w5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.098273 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73dbab21-b412-4ef5-af91-1a4c7498ae72" (UID: "73dbab21-b412-4ef5-af91-1a4c7498ae72"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.179663 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9w5k\" (UniqueName: \"kubernetes.io/projected/73dbab21-b412-4ef5-af91-1a4c7498ae72-kube-api-access-p9w5k\") on node \"crc\" DevicePath \"\"" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.179693 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.179705 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73dbab21-b412-4ef5-af91-1a4c7498ae72-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.485517 5028 generic.go:334] "Generic (PLEG): container finished" podID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerID="d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699" exitCode=0 Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.485583 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9rjl" event={"ID":"73dbab21-b412-4ef5-af91-1a4c7498ae72","Type":"ContainerDied","Data":"d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699"} Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.485624 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t9rjl" event={"ID":"73dbab21-b412-4ef5-af91-1a4c7498ae72","Type":"ContainerDied","Data":"4068967d3fd19c8755c58505114ec33ee4404e35259fa47728a84710ed667fe4"} Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.485634 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t9rjl" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.485650 5028 scope.go:117] "RemoveContainer" containerID="d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.526316 5028 scope.go:117] "RemoveContainer" containerID="970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.572901 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9rjl"] Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.577539 5028 scope.go:117] "RemoveContainer" containerID="9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.585674 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t9rjl"] Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.641353 5028 scope.go:117] "RemoveContainer" containerID="d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699" Nov 23 10:27:47 crc kubenswrapper[5028]: E1123 10:27:47.642542 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699\": container with ID starting with d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699 not found: ID does not exist" containerID="d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.642601 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699"} err="failed to get container status \"d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699\": rpc error: code = NotFound desc = could not find container \"d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699\": container with ID starting with d6a0c745cdf19918f92d35fdeaff9d0a8bd96033ba30b99e7dda6efd1c299699 not found: ID does not exist" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.642639 5028 scope.go:117] "RemoveContainer" containerID="970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7" Nov 23 10:27:47 crc kubenswrapper[5028]: E1123 10:27:47.642970 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7\": container with ID starting with 970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7 not found: ID does not exist" containerID="970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.643048 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7"} err="failed to get container status \"970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7\": rpc error: code = NotFound desc = could not find container \"970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7\": container with ID starting with 970e38a4f2cd0692a5a1c40e5a9ae6af7f4e9942503f9f3b811218470d6669f7 not found: ID does not exist" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.643123 5028 scope.go:117] "RemoveContainer" 
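The ContainerStatus NotFound errors around this point are a benign race: after the pod REMOVE, the kubelet queries the runtime about containers its own earlier RemoveContainer calls already deleted, and the runtime answers with gRPC NotFound. A caller wanting the same idempotent cleanup can treat NotFound as success; a minimal sketch, where removeFn is a hypothetical stand-in for the CRI call:

    package cleanup

    import (
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent treats a gRPC NotFound from the runtime as success:
    // the container is already gone, which is the same benign condition the
    // kubelet logs above as "DeleteContainer returned error".
    func removeIfPresent(removeFn func(containerID string) error, containerID string) error {
        if err := removeFn(containerID); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }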
containerID="9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9" Nov 23 10:27:47 crc kubenswrapper[5028]: E1123 10:27:47.643501 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9\": container with ID starting with 9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9 not found: ID does not exist" containerID="9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9" Nov 23 10:27:47 crc kubenswrapper[5028]: I1123 10:27:47.643565 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9"} err="failed to get container status \"9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9\": rpc error: code = NotFound desc = could not find container \"9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9\": container with ID starting with 9a089078d74b74a9a73abf1f11bdcf3f3c3a90f737dbba8307e8d56f245fe9a9 not found: ID does not exist" Nov 23 10:27:49 crc kubenswrapper[5028]: I1123 10:27:49.068375 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" path="/var/lib/kubelet/pods/73dbab21-b412-4ef5-af91-1a4c7498ae72/volumes" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.364846 5028 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6k9wk"] Nov 23 10:29:01 crc kubenswrapper[5028]: E1123 10:29:01.366573 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="registry-server" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.366600 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="registry-server" Nov 23 10:29:01 crc kubenswrapper[5028]: E1123 10:29:01.366636 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="extract-utilities" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.366651 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="extract-utilities" Nov 23 10:29:01 crc kubenswrapper[5028]: E1123 10:29:01.366692 5028 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="extract-content" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.366705 5028 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="extract-content" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.367086 5028 memory_manager.go:354] "RemoveStaleState removing state" podUID="73dbab21-b412-4ef5-af91-1a4c7498ae72" containerName="registry-server" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.370267 5028 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.382267 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6k9wk"] Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.494568 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-catalog-content\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.495616 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4hb\" (UniqueName: \"kubernetes.io/projected/f75b75ee-82fc-4a4b-9920-2662652b7104-kube-api-access-qc4hb\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.495711 5028 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-utilities\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.597305 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-catalog-content\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.597422 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4hb\" (UniqueName: \"kubernetes.io/projected/f75b75ee-82fc-4a4b-9920-2662652b7104-kube-api-access-qc4hb\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.597474 5028 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-utilities\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.598026 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-catalog-content\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.598045 5028 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-utilities\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.617846 5028 operation_generator.go:637] 
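Both catalog pods mount the same trio of volumes: two emptyDirs (utilities and catalog-content, scratch space for the extract-utilities and extract-content init steps) plus a projected service-account token volume (kube-api-access-*). A sketch of those volume shapes using the client types, assuming only what the mount lines above show; the projected token volume is normally synthesized by the API server's admission machinery rather than written by hand:

    package catalog

    import corev1 "k8s.io/api/core/v1"

    // catalogVolumes mirrors the three volumes the kubelet mounts above.
    func catalogVolumes() []corev1.Volume {
        emptyDir := corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}
        return []corev1.Volume{
            {Name: "utilities", VolumeSource: emptyDir},       // scratch for extract-utilities
            {Name: "catalog-content", VolumeSource: emptyDir}, // scratch for extract-content
            {Name: "kube-api-access-qc4hb", VolumeSource: corev1.VolumeSource{ // auto-injected in practice
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
                    },
                },
            }},
        }
    }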
"MountVolume.SetUp succeeded for volume \"kube-api-access-qc4hb\" (UniqueName: \"kubernetes.io/projected/f75b75ee-82fc-4a4b-9920-2662652b7104-kube-api-access-qc4hb\") pod \"community-operators-6k9wk\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:01 crc kubenswrapper[5028]: I1123 10:29:01.727903 5028 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:02 crc kubenswrapper[5028]: I1123 10:29:02.305505 5028 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6k9wk"] Nov 23 10:29:02 crc kubenswrapper[5028]: I1123 10:29:02.634038 5028 generic.go:334] "Generic (PLEG): container finished" podID="f75b75ee-82fc-4a4b-9920-2662652b7104" containerID="fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93" exitCode=0 Nov 23 10:29:02 crc kubenswrapper[5028]: I1123 10:29:02.634131 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k9wk" event={"ID":"f75b75ee-82fc-4a4b-9920-2662652b7104","Type":"ContainerDied","Data":"fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93"} Nov 23 10:29:02 crc kubenswrapper[5028]: I1123 10:29:02.634457 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k9wk" event={"ID":"f75b75ee-82fc-4a4b-9920-2662652b7104","Type":"ContainerStarted","Data":"1917640b4b98ce2ebda7fb4b4116ebe295c1e2f67d11588ab7e94471be55c383"} Nov 23 10:29:04 crc kubenswrapper[5028]: I1123 10:29:04.669908 5028 generic.go:334] "Generic (PLEG): container finished" podID="f75b75ee-82fc-4a4b-9920-2662652b7104" containerID="5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5" exitCode=0 Nov 23 10:29:04 crc kubenswrapper[5028]: I1123 10:29:04.670016 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k9wk" event={"ID":"f75b75ee-82fc-4a4b-9920-2662652b7104","Type":"ContainerDied","Data":"5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5"} Nov 23 10:29:05 crc kubenswrapper[5028]: I1123 10:29:05.691525 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k9wk" event={"ID":"f75b75ee-82fc-4a4b-9920-2662652b7104","Type":"ContainerStarted","Data":"9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9"} Nov 23 10:29:05 crc kubenswrapper[5028]: I1123 10:29:05.725249 5028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6k9wk" podStartSLOduration=2.214447371 podStartE2EDuration="4.725215106s" podCreationTimestamp="2025-11-23 10:29:01 +0000 UTC" firstStartedPulling="2025-11-23 10:29:02.636717875 +0000 UTC m=+13126.334122654" lastFinishedPulling="2025-11-23 10:29:05.1474856 +0000 UTC m=+13128.844890389" observedRunningTime="2025-11-23 10:29:05.721667078 +0000 UTC m=+13129.419071867" watchObservedRunningTime="2025-11-23 10:29:05.725215106 +0000 UTC m=+13129.422619935" Nov 23 10:29:11 crc kubenswrapper[5028]: I1123 10:29:11.729231 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:11 crc kubenswrapper[5028]: I1123 10:29:11.730015 5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:11 crc kubenswrapper[5028]: I1123 10:29:11.817025 
5028 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:11 crc kubenswrapper[5028]: I1123 10:29:11.911631 5028 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:12 crc kubenswrapper[5028]: I1123 10:29:12.086204 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6k9wk"] Nov 23 10:29:13 crc kubenswrapper[5028]: I1123 10:29:13.826154 5028 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6k9wk" podUID="f75b75ee-82fc-4a4b-9920-2662652b7104" containerName="registry-server" containerID="cri-o://9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9" gracePeriod=2 Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.334598 5028 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.492320 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-catalog-content\") pod \"f75b75ee-82fc-4a4b-9920-2662652b7104\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.492929 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc4hb\" (UniqueName: \"kubernetes.io/projected/f75b75ee-82fc-4a4b-9920-2662652b7104-kube-api-access-qc4hb\") pod \"f75b75ee-82fc-4a4b-9920-2662652b7104\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.493007 5028 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-utilities\") pod \"f75b75ee-82fc-4a4b-9920-2662652b7104\" (UID: \"f75b75ee-82fc-4a4b-9920-2662652b7104\") " Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.493903 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-utilities" (OuterVolumeSpecName: "utilities") pod "f75b75ee-82fc-4a4b-9920-2662652b7104" (UID: "f75b75ee-82fc-4a4b-9920-2662652b7104"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.494140 5028 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-utilities\") on node \"crc\" DevicePath \"\"" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.503757 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75b75ee-82fc-4a4b-9920-2662652b7104-kube-api-access-qc4hb" (OuterVolumeSpecName: "kube-api-access-qc4hb") pod "f75b75ee-82fc-4a4b-9920-2662652b7104" (UID: "f75b75ee-82fc-4a4b-9920-2662652b7104"). InnerVolumeSpecName "kube-api-access-qc4hb". 
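The deletion sequence above ("SyncLoop DELETE" followed by "Killing container with a grace period ... gracePeriod=2") is driven by the grace period attached to the API delete; the kubelet forwards the effective value to the runtime when it stops the container. A client-side sketch of issuing such a delete; clientset wiring is omitted, and while the 2-second value matches the kill line above, the actual source of the grace period for these pods is not visible in this log:

    package cleanup

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteWithGrace issues an API delete carrying an explicit grace period,
    // the kind of request behind the "SyncLoop DELETE" lines above.
    func deleteWithGrace(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        grace := int64(2) // matches gracePeriod=2 in the kill line above
        return cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
            GracePeriodSeconds: &grace,
        })
    }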
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.571446 5028 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f75b75ee-82fc-4a4b-9920-2662652b7104" (UID: "f75b75ee-82fc-4a4b-9920-2662652b7104"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.596541 5028 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc4hb\" (UniqueName: \"kubernetes.io/projected/f75b75ee-82fc-4a4b-9920-2662652b7104-kube-api-access-qc4hb\") on node \"crc\" DevicePath \"\"" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.596583 5028 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f75b75ee-82fc-4a4b-9920-2662652b7104-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.850575 5028 generic.go:334] "Generic (PLEG): container finished" podID="f75b75ee-82fc-4a4b-9920-2662652b7104" containerID="9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9" exitCode=0 Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.850637 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k9wk" event={"ID":"f75b75ee-82fc-4a4b-9920-2662652b7104","Type":"ContainerDied","Data":"9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9"} Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.850678 5028 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6k9wk" event={"ID":"f75b75ee-82fc-4a4b-9920-2662652b7104","Type":"ContainerDied","Data":"1917640b4b98ce2ebda7fb4b4116ebe295c1e2f67d11588ab7e94471be55c383"} Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.850703 5028 scope.go:117] "RemoveContainer" containerID="9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.850708 5028 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6k9wk" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.897281 5028 scope.go:117] "RemoveContainer" containerID="5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.911142 5028 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6k9wk"] Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.933554 5028 scope.go:117] "RemoveContainer" containerID="fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.933720 5028 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6k9wk"] Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.994016 5028 scope.go:117] "RemoveContainer" containerID="9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9" Nov 23 10:29:14 crc kubenswrapper[5028]: E1123 10:29:14.994496 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9\": container with ID starting with 9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9 not found: ID does not exist" containerID="9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.994572 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9"} err="failed to get container status \"9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9\": rpc error: code = NotFound desc = could not find container \"9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9\": container with ID starting with 9fe2f59a6ad8a9924efaccfc0aecacac0c376a8edee9719133bfcbb99547abb9 not found: ID does not exist" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.994638 5028 scope.go:117] "RemoveContainer" containerID="5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5" Nov 23 10:29:14 crc kubenswrapper[5028]: E1123 10:29:14.995030 5028 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5\": container with ID starting with 5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5 not found: ID does not exist" containerID="5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.995073 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5"} err="failed to get container status \"5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5\": rpc error: code = NotFound desc = could not find container \"5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5\": container with ID starting with 5b65f6996e1d529da6a2527c725bb1f8a77143dc4fd91046ddb9e9117e2e34c5 not found: ID does not exist" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.995102 5028 scope.go:117] "RemoveContainer" containerID="fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93" Nov 23 10:29:14 crc kubenswrapper[5028]: E1123 10:29:14.995409 5028 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93\": container with ID starting with fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93 not found: ID does not exist" containerID="fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93" Nov 23 10:29:14 crc kubenswrapper[5028]: I1123 10:29:14.995466 5028 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93"} err="failed to get container status \"fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93\": rpc error: code = NotFound desc = could not find container \"fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93\": container with ID starting with fd496daa5b8efebdd1afce97a68a672b13600ad211bb140f7cd407a20ee93c93 not found: ID does not exist" Nov 23 10:29:15 crc kubenswrapper[5028]: I1123 10:29:15.070638 5028 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75b75ee-82fc-4a4b-9920-2662652b7104" path="/var/lib/kubelet/pods/f75b75ee-82fc-4a4b-9920-2662652b7104/volumes"